- Gemini: a family of highly capable multimodal models. G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, et al. arXiv preprint arXiv:2312.11805, 2023. (Cited by 1583)
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, et al. arXiv preprint arXiv:2403.05530, 2024. (Cited by 395)
- GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models. A Prasad, P Hase, X Zhou, M Bansal. arXiv preprint arXiv:2203.07281, 2022. (Cited by 141)
- What Can We Learn from Collective Human Opinions on Natural Language Inference Data? Y Nie, X Zhou, M Bansal. arXiv preprint arXiv:2010.03532, 2020. (Cited by 118)
- Agent-aware dropout DQN for safe and efficient on-line dialogue policy learning. L Chen, X Zhou, C Chang, R Yang, K Yu. Proceedings of the 2017 Conference on Empirical Methods in Natural Language …, 2017. (Cited by 48)
- Towards Robustifying NLI Models Against Lexical Dataset Biases. X Zhou, M Bansal. arXiv preprint arXiv:2005.04732, 2020. (Cited by 46)
- The Curse of Performance Instability in Analysis Datasets: Consequences, Source, and Suggestions. X Zhou, Y Nie, H Tan, M Bansal. arXiv preprint arXiv:2004.13606, 2020. (Cited by 42)
- Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning. X Zhou, Y Nie, M Bansal. arXiv preprint arXiv:2104.08676, 2021. (Cited by 29)
- ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness. A Prasad, S Saha, X Zhou, M Bansal. arXiv preprint arXiv:2304.10703, 2023. (Cited by 27)
- On-line dialogue policy learning with companion teaching. L Chen, R Yang, C Chang, Z Ye, X Zhou, K Yu. Proceedings of the 15th Conference of the European Chapter of the …, 2017. (Cited by 18)
- Affordable on-line dialogue policy learning. C Chang, R Yang, L Chen, X Zhou, K Yu. Proceedings of the 2017 Conference on Empirical Methods in Natural Language …, 2017. (Cited by 16)
- Hidden Biases in Unreliable News Detection Datasets. X Zhou, H Elfardy, C Christodoulopoulos, T Butler, M Bansal. EACL, 2021. (Cited by 13)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality. Y Jiang, X Zhou, M Bansal. arXiv preprint arXiv:2211.15578, 2022. (Cited by 7)
- Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs? P Hase, T Hofweber, X Zhou, E Stengel-Eskin, M Bansal. arXiv preprint arXiv:2406.19354, 2024. (Cited by 4)
- Data Factors for Better Compositional Generalization. X Zhou, Y Jiang, M Bansal. arXiv preprint arXiv:2311.04420, 2023. (Cited by 3)
- Masked Part-Of-Speech Model: Does Modeling Long Context Help Unsupervised POS-tagging? X Zhou, S Zhang, M Bansal. arXiv preprint arXiv:2206.14969, 2022. (Cited by 2)
- Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings. Y Jiang, X Zhou, M Bansal. arXiv preprint arXiv:2402.06492, 2024. (Cited by 1)
- Can Sequence-to-Sequence Transformers Naturally Understand Sequential Instructions? X Zhou, A Gupta, S Upadhyay, M Bansal, M Faruqui. Proceedings of the 12th Joint Conference on Lexical and Computational …, 2023.