Ziniu Li
About me

I am a Ph.D. student at The Chinese University of Hong Kong, Shenzhen (CUHKSZ), advised by Prof. Zhi-Quan (Tom) Luo. I am interested in artificial intelligence, especially reinforcement learning and large language models. I have worked or interned at Tencent, Nanjing University, Cardinal Operations, and elsewhere. Feel free to contact me if you would like to discuss ideas.

Research Statement

My research focuses on the algorithm design and theoretical analysis of machine learning models, particularly in reinforcement learning. Currently, I work primarily on large language models, pursuing the development of agents capable of general intelligence. My work has been recognized with several honors, including a Best Paper Runner-Up award (NeurIPS 2024 FITML Workshop), oral presentations (ICLR 2024 Tiny Paper Track, UAI 2023, NeurIPS 2021 EcoRL Workshop), and a spotlight presentation (NeurIPS 2023).

In the field of large language models, my work spans several key areas: data selection (NeurIPS 2023 Spotlight), diversity-preserving supervised fine-tuning (ICLR 2025; NeurIPS 2024 FITML Workshop Best Paper Runner-Up), generalization of RLHF (ICLR 2024 Tiny Paper Oral), computationally efficient RLHF (ICML 2024), and hallucination mitigation (ICLR 2025).

In the field of imitation learning and reinforcement learning, I am interested in the theory of sample complexity (NeurIPS 2020, TPAMI 2021, UAI 2023 Oral) and efficient exploration (ICLR 2022, NeurIPS 2021 EcoRL Workshop Oral, DAI 2020), as well as applications in robotics (ICLR 2024 Blog) and signal processing (TSP 2024).

I also work on optimization-centric topics with other researchers, including understanding Adam in training Transformers (NeurIPS 2024), memory-efficient optimizers (ICLR 2025), zeroth-order optimization (IJCAI 2020), and prompt tuning (EMNLP 2024).

Recent Highlights

*: indicates equal contribution or alphabetical ordering.

Preserving Diversity in Supervised Fine-tuning of Large Language Models
TL;DR: This work introduces a game-theoretic distribution-matching method to address the diversity-reduction and knowledge-forgetting issues in SFT.

ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models
TL;DR: This work shows that PPO is overkill for RLHF in LLMs and introduces ReMax, which requires half the memory of PPO and runs twice as fast (a minimal sketch follows the highlights).

When is RL better than DPO in RLHF? A Representation and Optimization Perspective
TL;DR: This work analyzes reward-modeling quality through the lens of representations and the sources of optimization error.

Imitation Learning from Imperfection: Theoretical Justifications and Algorithms
TL;DR: This work validates that importance sampling is effective for data selection when leveraging multiple imperfect (out-of-distribution and low-quality) data sources (a second sketch follows the highlights).
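The ReMax update can be summarized in a few lines: it is REINFORCE with the reward of the greedily decoded response used as the baseline, which removes PPO's learned value network and its memory cost. Below is a minimal PyTorch sketch of that idea; the function name, tensor shapes, and toy reward values are illustrative assumptions, not the paper's released code.

```python
import torch

def remax_loss(logp_sampled: torch.Tensor, r_sampled: float, r_greedy: float) -> torch.Tensor:
    """REINFORCE loss with a greedy-decoding baseline (the ReMax idea).

    logp_sampled: log-probabilities of the tokens of a sampled response, shape (T,).
    r_sampled:    scalar reward of the sampled response.
    r_greedy:     scalar reward of the greedily decoded response; using it as a
                  baseline replaces PPO's learned value network and its memory cost.
    """
    advantage = r_sampled - r_greedy           # baseline subtraction reduces variance
    return -advantage * logp_sampled.sum()     # minimize negative (baselined) reward

# Toy usage: 6 response tokens over a 50-token vocabulary (all numbers made up).
logits = torch.randn(6, 50, requires_grad=True)                   # stand-in for policy outputs
tokens = torch.randint(0, 50, (6,))                               # sampled response tokens
logp = torch.log_softmax(logits, dim=-1)[torch.arange(6), tokens]
loss = remax_loss(logp, r_sampled=0.8, r_greedy=0.5)              # rewards from a reward model
loss.backward()                                                   # gradients flow to the policy
```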
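For the imitation-learning highlight, the core mechanism is ordinary importance weighting: samples from an imperfect source are reweighted toward the target distribution before contributing to the loss. The NumPy sketch below is a generic illustration of that mechanism; the function name, clipping constant, and toy numbers are assumptions, not the paper's exact estimator.

```python
import numpy as np

def importance_weights(logp_target: np.ndarray, logp_source: np.ndarray, clip: float = 10.0) -> np.ndarray:
    """Ratios w(x) = p_target(x) / p_source(x) for samples drawn from an
    imperfect source; clipping the ratio keeps the estimator's variance bounded.
    """
    w = np.exp(logp_target - logp_source)      # density ratio, computed in log space for stability
    return np.minimum(w, clip)                 # hypothetical clip value, not from the paper

# Toy usage: reweight per-sample losses from a low-quality data source.
logp_target = np.array([-1.0, -0.5, -2.0])    # log-density under the target distribution
logp_source = np.array([-0.8, -1.5, -1.0])    # log-density under the imperfect source
losses = np.array([0.9, 0.4, 1.2])
w = importance_weights(logp_target, logp_source)
weighted_loss = float((w * losses).sum() / w.sum())   # self-normalized weighted loss
```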
Service

Reviewer: NeurIPS (Top Reviewer), ICML (Outstanding Reviewer), ICLR (Highlighted Reviewer).
Teaching Assistant
Lecturer
Award