About Me
I am Yulai, a 3rd-year Ph.D. candidate at Princeton working on machine learning.
My research explores modern reinforcement learning and diffusion models, both theoretically and empirically. I am particularly interested in tackling challenging scientific problems through data-driven approaches.
Experience
- Research Intern @ BRAID (Biology Research | AI Development), Research & Early Development, Genentech
- Ph.D. Student @ Electrical and Computer Engineering, Princeton University
- Visiting Student @ Institute for Machine Learning, ETH Zürich
- Research Assistant @ Computer Science & Engineering, University of Washington
- Bachelor's @ Electronic Engineering, Tsinghua University
Publications
* denotes equal contribution or alphabetical ordering.
Reward-Guided Refinement in Diffusion Models with Applications to Protein and DNA Design
Masatoshi Uehara, Xingyu Su, Yulai Zhao, Xiner Li, Aviv Regev, Shuiwang Ji, Sergey Levine, Tommaso Biancalani
International Conference on Machine Learning (ICML) 2025
[arXiv]
Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review
Masatoshi Uehara, Yulai Zhao, Chenyu Wang, Xiner Li, Aviv Regev, Sergey Levine, Tommaso Biancalani
[arXiv]
Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding
Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gökcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, Masatoshi Uehara
[arXiv]
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review
Masatoshi Uehara*, Yulai Zhao*, Tommaso Biancalani, Sergey Levine
[arXiv] [GitHub]
Adding Conditional Control to Diffusion Models with Reinforcement Learning
Yulai Zhao*, Masatoshi Uehara*, Gabriele Scalia, Sunyuan Kung, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali
International Conference on Learning Representations (ICLR) 2025
[paper] [arXiv] [GitHub]
Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models
Masatoshi Uehara*, Yulai Zhao*, Ehsan Hajiramezanali, Gabriele Scalia, Gökcen Eraslan, Avantika Lal, Sergey Levine, Tommaso Biancalani
Conference on Neural Information Processing Systems (NeurIPS) 2024
[arXiv]
Feedback Efficient Online Fine-Tuning of Diffusion Models
Masatoshi Uehara*, Yulai Zhao*, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Sergey Levine, Tommaso Biancalani
International Conference on Machine Learning (ICML) 2024
[paper] [arXiv] [GitHub]
Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control
Masatoshi Uehara*, Yulai Zhao*, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Tommaso Biancalani, Sergey Levine
[arXiv]
Provably Efficient CVaR RL in Low-rank MDPs
Yulai Zhao*, Wenhao Zhan*, Xiaoyan Hu*, Ho-fung Leung, Farzan Farnia, Wen Sun, Jason D. Lee
International Conference on Learning Representations (ICLR) 2024
[paper] [arXiv]
Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning
Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee
International Conference on Machine Learning (ICML) 2023
[paper] [arXiv]
Blessing of Class Diversity in Pre-training
Yulai Zhao, Jianshu Chen, Simon S. Du
International Conference on Artificial Intelligence and Statistics (AISTATS) 2023
[Oral Presentation] [Notable Paper]
[paper] [arXiv]
Optimizing the Performative Risk under Weak Convexity Assumptions
Yulai Zhao
NeurIPS 2022 Workshop on Optimization for Machine Learning
[paper] [arXiv]
Provably Efficient Policy Optimization for Two-Player Zero-Sum Markov Games
Yulai Zhao, Yuandong Tian, Jason D. Lee, Simon S. Du
International Conference on Artificial Intelligence and Statistics (AISTATS) 2022
[paper] [arXiv]