About Me
I am Yulai, a third-year Ph.D. candidate at Princeton University working on machine learning.
My research explores modern reinforcement learning and diffusion models, both theoretically and empirically. I am particularly interested in solving challenging scientific problems through data-driven approaches.
Experiences
- Research Intern @ BRAID (Biology Research | AI Development), Research & Early Development, Genentech
- Ph.D. Student @ Electrical and Computer Engineering, Princeton University
- Visiting Student @ Institute for Machine Learning, ETH Zürich
- Research Assistant @ Computer Science & Engineering, University of Washington
- Bachelor's Degree @ Electronic Engineering, Tsinghua University
Publications
* denotes equal contribution or alphabetical ordering.
Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding
Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, Masatoshi Uehara
[arXiv]
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review
Masatoshi Uehara*, Yulai Zhao*, Tommaso Biancalani, Sergey Levine
[arXiv] [GitHub]
Adding Conditional Control to Diffusion Models with Reinforcement Learning
Yulai Zhao*, Masatoshi Uehara*, Gabriele Scalia, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali
[arXiv]
Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models
Masatoshi Uehara*, Yulai Zhao*, Ehsan Hajiramezanali, Gabriele Scalia, Gökcen Eraslan, Avantika Lal, Sergey Levine, Tommaso Biancalani
Conference on Neural Information Processing Systems (NeurIPS) 2024
[arXiv]
Feedback Efficient Online Fine-Tuning of Diffusion Models
Masatoshi Uehara*, Yulai Zhao*, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Sergey Levine, Tommaso Biancalani
International Conference on Machine Learning (ICML) 2024
[paper] [arXiv] [GitHub]
Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control
Masatoshi Uehara*, Yulai Zhao*, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Tommaso Biancalani, Sergey Levine
[arXiv]
Provably Efficient CVaR RL in Low-rank MDPs
Yulai Zhao*, Wenhao Zhan*, Xiaoyan Hu*, Ho-fung Leung, Farzan Farnia, Wen Sun, Jason D. Lee
International Conference on Learning Representations (ICLR) 2024
[paper] [arXiv]
Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning
Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee
International Conference on Machine Learning (ICML) 2023
[paper] [arXiv]
Blessing of Class Diversity in Pre-training
Yulai Zhao, Jianshu Chen, Simon S. Du
International Conference on Artificial Intelligence and Statistics (AISTATS) 2023
[Oral Presentation] [Notable Paper]
[paper] [arXiv]
Optimizing the Performative Risk under Weak Convexity Assumptions
Yulai Zhao
NeurIPS 2022 Workshop on Optimization for Machine Learning
[paper] [arXiv]
Provably Efficient Policy Optimization for Two-Player Zero-Sum Markov Games
Yulai Zhao, Yuandong Tian, Jason D. Lee, Simon S. Du
International Conference on Artificial Intelligence and Statistics (AISTATS) 2022
[paper] [arXiv]