Zhe Wang
Verified email at osu.edu
Title · Cited by · Year
SpiderBoost and momentum: Faster variance reduction algorithms
Z Wang, K Ji, Y Zhou, Y Liang, V Tarokh
arXiv preprint arXiv:1810.10690, 2018
Cited by 229* · 2018
Improving sample complexity bounds for (natural) actor-critic algorithms
T Xu, Z Wang, Y Liang
Advances in Neural Information Processing Systems 33, 4358-4369, 2020
Cited by 102 · 2020
Improved zeroth-order variance reduced algorithms and analysis for nonconvex optimization
K Ji, Z Wang, Y Zhou, Y Liang
International conference on machine learning, 3100-3109, 2019
Cited by 60 · 2019
Non-asymptotic convergence analysis of two time-scale (natural) actor-critic algorithms
T Xu, Z Wang, Y Liang
arXiv preprint arXiv:2005.03557, 2020
Cited by 58 · 2020
Stochastic variance-reduced cubic regularization for nonconvex optimization
Z Wang, Y Zhou, Y Liang, G Lan
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 54 · 2019
Reanalysis of variance reduced temporal difference learning
T Xu, Z Wang, Y Zhou, Y Liang
arXiv preprint arXiv:2001.01898, 2020
Cited by 45 · 2020
Cubic regularization with momentum for nonconvex optimization
Z Wang, Y Zhou, Y Liang, G Lan
Uncertainty in Artificial Intelligence, 313-322, 2020
Cited by 27 · 2020
Gradient free minimax optimization: Variance reduction and faster convergence
T Xu, Z Wang, Y Liang, HV Poor
arXiv preprint arXiv:2006.09361, 2020
Cited by 23 · 2020
Spectral algorithms for community detection in directed networks
Z Wang, Y Liang, P Ji
Journal of Machine Learning Research, 2020
Cited by 22 · 2020
Convergence of cubic regularization for nonconvex optimization under KL property
Y Zhou, Z Wang, Y Liang
Advances in Neural Information Processing Systems 31, 2018
Cited by 22 · 2018
Enhanced first and zeroth order variance reduced algorithms for min-max optimization
T Xu, Z Wang, Y Liang, HV Poor
Cited by 21 · 2020
History-gradient aided batch size adaptation for variance reduced algorithms
K Ji, Z Wang, B Weng, Y Zhou, W Zhang, Y Liang
International Conference on Machine Learning, 4762-4772, 2020
Cited by 18* · 2020
A note on inexact gradient and Hessian conditions for cubic regularized Newton’s method
Z Wang, Y Zhou, Y Liang, G Lan
Operations Research Letters 47 (2), 146-149, 2019
Cited by 15* · 2019
Momentum schemes with stochastic variance reduction for nonconvex composite optimization
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh
arXiv preprint arXiv:1902.02715, 2019
Cited by 13 · 2019
Proximal gradient algorithm with momentum and flexible parameter restart for nonconvex optimization
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh
arXiv preprint arXiv:2002.11582, 2020
Cited by 9 · 2020
ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization
X Huang, R Xu, H Zhou, Z Wang, Z Liu, L Li
Proceedings of the AAAI Conference on Artificial Intelligence 35 (9), 7857-7864, 2021
Cited by 1 · 2021
Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs
X Huang, H Zhou, R Xu, Z Wang, L Li
arXiv preprint arXiv:2006.07037, 2020
2020
Articles 1–17