Yu Bai
OpenAI
Verified email at openai.com
Title · Cited by · Year
The landscape of empirical risk for nonconvex losses
S Mei, Y Bai, A Montanari
The Annals of Statistics 46 (6A), 2747-2774, 2018
390 · 2018
Transformers as statisticians: Provable in-context learning with in-context algorithm selection
Y Bai, F Chen, H Wang, C Xiong, S Mei
Advances in neural information processing systems 36, 57125-57211, 2023
216 · 2023
Provable self-play algorithms for competitive reinforcement learning
Y Bai, C Jin
International conference on machine learning, 551-560, 2020
195 · 2020
Policy finetuning: Bridging sample-efficient offline and online reinforcement learning
T Xie, N Jiang, H Wang, C Xiong, Y Bai
Advances in neural information processing systems 34, 27395-27407, 2021
194 · 2021
Near-Optimal Reinforcement Learning with Self-Play
Y Bai, C Jin, T Yu
Advances in Neural Information Processing Systems, 2020
168 · 2020
OpenAI o1 system card
A Jaech, A Kalai, A Lerer, A Richardson, A El-Kishky, A Low, A Helyar, ...
arXiv preprint arXiv:2412.16720, 2024
167 · 2024
A sharp analysis of model-based reinforcement learning with self-play
Q Liu, T Yu, Y Bai, C Jin
International Conference on Machine Learning, 7001-7010, 2021
163 · 2021
Beyond linearization: On quadratic and higher-order approximation of wide neural networks
Y Bai, JD Lee
International Conference on Learning Representations (ICLR) 2020, 2019
144 · 2019
ProxQuant: Quantized neural networks via proximal operators
Y Bai, YX Wang, E Liberty
International Conference on Learning Representations (ICLR) 2019, 2018
131 · 2018
Provably Efficient Q-Learning with Low Switching Cost
Y Bai, T Xie, N Jiang, YX Wang
Advances in Neural Information Processing Systems, 2019
121 · 2019
When can we learn general-sum Markov games with a large number of players sample-efficiently?
Z Song, S Mei, Y Bai
International Conference on Learning Representations (ICLR) 2022, 2021
119 · 2021
Near-optimal provable uniform convergence in offline policy evaluation for reinforcement learning
M Yin, Y Bai, YX Wang
International Conference on Artificial Intelligence and Statistics, 1567-1575, 2021
105* · 2021
Negative preference optimization: From catastrophic collapse to effective unlearning
R Zhang, L Lin, Y Bai, S Mei
arXiv preprint arXiv:2404.05868, 2024
99 · 2024
How important is the train-validation split in meta-learning?
Y Bai, M Chen, P Zhou, T Zhao, J Lee, S Kakade, H Wang, C Xiong
International Conference on Machine Learning, 543-553, 2021
92 · 2021
Approximability of discriminators implies diversity in GANs
Y Bai, T Ma, A Risteski
International Conference on Learning Representations (ICLR) 2019, 2018
90 · 2018
The role of coverage in online reinforcement learning
T Xie, DJ Foster, Y Bai, N Jiang, SM Kakade
arXiv preprint arXiv:2210.04157, 2022
88 · 2022
Sample-efficient learning of Stackelberg equilibria in general-sum games
Y Bai, C Jin, H Wang, C Xiong
Advances in Neural Information Processing Systems 34, 25799-25811, 2021
81 · 2021
Near-optimal offline reinforcement learning via double variance reduction
M Yin, Y Bai, YX Wang
Advances in neural information processing systems 34, 7677-7688, 2021
76 · 2021
Improved online conformal prediction via strongly adaptive online learning
A Bhatnagar, H Wang, C Xiong, Y Bai
International Conference on Machine Learning, 2337-2363, 2023
60 · 2023
Towards understanding hierarchical learning: Benefits of neural representations
M Chen, Y Bai, JD Lee, T Zhao, H Wang, C Xiong, R Socher
Advances in Neural Information Processing Systems, 2020
60 · 2020
Articles 1–20