| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| GloVe: Global vectors for word representation | J Pennington, R Socher, CD Manning | Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532-1543 | 42973 | 2014 |
| Semi-supervised recursive autoencoders for predicting sentiment distributions | R Socher, J Pennington, EH Huang, AY Ng, CD Manning | Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing | 1763 | 2011 |
| Deep neural networks as Gaussian processes | J Lee, Y Bahri, R Novak, SS Schoenholz, J Pennington, J Sohl-Dickstein | arXiv preprint arXiv:1711.00165 | 1274 | 2017 |
| Dynamic pooling and unfolding recursive autoencoders for paraphrase detection | R Socher, EH Huang, J Pennington, CD Manning, AY Ng | Advances in Neural Information Processing Systems 24, 801-809 | 1168 | 2011 |
| Wide neural networks of any depth evolve as linear models under gradient descent | J Lee, L Xiao, S Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ... | Advances in Neural Information Processing Systems 32 | 1100 | 2019 |
| Sensitivity and generalization in neural networks: an empirical study | R Novak, Y Bahri, DA Abolafia, J Pennington, J Sohl-Dickstein | arXiv preprint arXiv:1802.08760 | 490 | 2018 |
| Dynamical isometry and a mean field theory of CNNs: how to train 10,000-layer vanilla convolutional neural networks | L Xiao, Y Bahri, J Sohl-Dickstein, S Schoenholz, J Pennington | International Conference on Machine Learning, 5393-5402 | 372 | 2018 |
| Bayesian deep convolutional networks with many channels are Gaussian processes | R Novak, L Xiao, J Lee, Y Bahri, G Yang, J Hron, DA Abolafia, ... | arXiv preprint arXiv:1810.05148 | 370 | 2018 |
| Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice | J Pennington, S Schoenholz, S Ganguli | Advances in Neural Information Processing Systems 30 | 305 | 2017 |
| Statistical mechanics of deep learning | Y Bahri, J Kadmon, J Pennington, SS Schoenholz, J Sohl-Dickstein, ... | Annual Review of Condensed Matter Physics 11 (1), 501-528 | 267 | 2020 |
| Nonlinear random matrix theory for deep learning | J Pennington, P Worah | Advances in Neural Information Processing Systems 30 | 234 | 2017 |
| Finite versus infinite neural networks: an empirical study | J Lee, S Schoenholz, J Pennington, B Adlam, L Xiao, R Novak, ... | Advances in Neural Information Processing Systems 33, 15156-15172 | 209 | 2020 |
| Hexagon functions and the three-loop remainder function | LJ Dixon, JM Drummond, M von Hippel, J Pennington | Journal of High Energy Physics 2013 (12), 1-95 | 206 | 2013 |
| A mean field theory of batch normalization | G Yang, J Pennington, V Rao, J Sohl-Dickstein, SS Schoenholz | arXiv preprint arXiv:1902.08129 | 203 | 2019 |
| The emergence of spectral universality in deep networks | J Pennington, S Schoenholz, S Ganguli | International Conference on Artificial Intelligence and Statistics, 1924-1932 | 187 | 2018 |
| The four-loop remainder function and multi-Regge behavior at NNLLA in planar N = 4 super-Yang-Mills theory | LJ Dixon, JM Drummond, C Duhr, J Pennington | Journal of High Energy Physics 2014 (6), 1-59 | 186 | 2014 |
| Geometry of neural network loss surfaces via random matrix theory | J Pennington, Y Bahri | International Conference on Machine Learning, 2798-2806 | 166 | 2017 |
| Single-valued harmonic polylogarithms and the multi-Regge limit | LJ Dixon, C Duhr, J Pennington | Journal of High Energy Physics 2012 (10), 1-68 | 154 | 2012 |
| The neural tangent kernel in high dimensions: triple descent and a multi-scale theory of generalization | B Adlam, J Pennington | International Conference on Machine Learning, 74-84 | 145 | 2020 |