Ruijie Zheng

I am a second-year Ph.D. student in Computer Science at the University of Maryland, College Park, where I am fortunate to be advised by Prof. Furong Huang and Prof. Hal Daumé III. Before that, I obtained my Bachelor's degree in Computer Science and Mathematics with high honors, also from the University of Maryland, College Park. My research spans a variety of topics in sequential decision making and reinforcement learning (RL), including multitask offline pretraining (foundation models for sequential decision making), representation learning in visual RL, model-based RL, and adversarial RL. My long-term goal is to develop a generally capable, robust, and self-adaptive embodied agent endowed with extensive prior knowledge from a broad spectrum of structured and unstructured data.

In visual RL, I developed TACO, a temporal contrastive representation learning mechanism that simultaneously learns state and action representations for online and offline visual RL algorithms. Building on TACO, Premier-TACO scales up to large-scale multitask offline pretraining, learning a universal visual representation that adapts efficiently to new tasks through few-shot imitation learning. Another recent work, DrM, is the first visual RL algorithm to master a diverse range of complex locomotion and manipulation tasks by leveraging the concept of the dormant ratio.
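
As a rough illustration of the dormant-ratio idea: it measures the fraction of neurons in a network whose average activation, normalized by the layer mean, falls below a small threshold. The sketch below is a minimal, hypothetical version for a ReLU network; the function name, threshold value, and toy policy network are illustrative assumptions, not the actual DrM implementation.

```python
import torch
import torch.nn as nn

def dormant_ratio(model: nn.Module, batch: torch.Tensor, tau: float = 0.025) -> float:
    """Fraction of ReLU units whose mean |activation|, normalized by the
    layer average, is below tau -- the network's "dormant" neurons.
    (Illustrative sketch; threshold and details are assumptions.)"""
    activations = []

    def hook(_module, _inp, out):
        activations.append(out.detach())

    # Capture post-activation outputs of every ReLU layer.
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(batch)
    for h in handles:
        h.remove()

    dormant, total = 0, 0
    for act in activations:                    # shape: (batch, num_units)
        score = act.abs().mean(dim=0)          # per-unit mean activation
        score = score / (score.mean() + 1e-9)  # normalize by the layer average
        dormant += (score <= tau).sum().item()
        total += score.numel()
    return dormant / max(total, 1)

# Toy usage with a small policy network (hypothetical sizes).
policy = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                       nn.Linear(256, 256), nn.ReLU(),
                       nn.Linear(256, 8))
obs = torch.randn(512, 64)
print(f"dormant ratio: {dormant_ratio(policy, obs):.3f}")
```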

Beyond visuomotor policy learning, I have also worked on efficient model-based RL, transfer RL across different observation spaces, and adversarial RL that makes policies robust against observation and communication attacks.

selected publications

  1. Preprint
    PRISE: Learning Temporal Action Abstractions as a Sequence Compression Problem
    Ruijie Zheng, Ching-An Cheng, Hal Daumé III, and 2 more authors
    Preprint. A short version was presented as a spotlight talk at the CoRL 2023 Pre-Training for Robot Learning Workshop, 2024
  2. Preprint
    Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss
    Ruijie Zheng, Yongyuan Liang, Xiyao Wang, and 7 more authors
    Preprint. Accepted at the NeurIPS 2023 Foundation Models for Decision Making Workshop, 2024
  3. ICLR 2024
    DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization
    Guowei Xu*, Ruijie Zheng*, Yongyuan Liang*, and 12 more authors
    In International Conference on Learning Representations (Spotlight, 5%), 2024
  4. NeurIPS 2023
    TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning
    Ruijie Zheng, Xiyao Wang, Yanchao Sun, and 5 more authors
    In Advances in Neural Information Processing Systems, 2023
  5. ICLR 2023
    Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function
    Ruijie Zheng, Xiyao Wang, Huazhe Xu, and 1 more author
    In International Conference on Learning Representations, 2023
  6. ICLR 2023
    Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication
    Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, and 4 more authors
    In International Conference on Learning Representations, 2023
  7. ICLR 2022
    Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL
    Yanchao Sun, Ruijie Zheng, Yongyuan Liang, and 1 more author
    In International Conference on Learning Representations, 2022
  8. ICLR 2022
    Transfer RL across Observation Feature Spaces via Model-Based Regularization
    Yanchao Sun, Ruijie Zheng, Xiyao Wang, and 2 more authors
    In International Conference on Learning Representations, 2022