Yi Luan
Google DeepMind
Cited by
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction
Y Luan, L He, M Ostendorf, H Hajishirzi
Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2018
Entity, relation, and event extraction with contextualized span representations
D Wadden, U Wennberg, Y Luan, H Hajishirzi
Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2019
A general framework for information extraction using dynamic span graphs
Y Luan, D Wadden, L He, A Shah, M Ostendorf, H Hajishirzi
Proc. Conf. North American Assoc. for Computational Linguistics (NAACL), 2019
Sparse, Dense, and Attentional Representations for Text Retrieval
Y Luan, J Eisenstein, K Toutanova, M Collins
Transactions of the Association for Computational Linguistics 9, 329-345, 2021
Text generation from knowledge graphs with graph transformers
R Koncel-Kedziorski, D Bekal, Y Luan, M Lapata, H Hajishirzi
Proc. Conf. North American Assoc. for Computational Linguistics (NAACL), 2019
Large Dual Encoders Are Generalizable Retrievers
J Ni, C Qu, J Lu, Z Dai, GH Ábrego, J Ma, VY Zhao, Y Luan, KB Hall, ...
arXiv preprint arXiv:2112.07899, 2021
Promptagator: Few-shot Dense Retrieval From 8 Examples
Z Dai, VY Zhao, J Ma, Y Luan, J Ni, J Lu, A Bakalov, K Guu, KB Hall, ...
arXiv preprint arXiv:2209.11755, 2022
Scientific information extraction with semi-supervised neural tagging
Y Luan, M Ostendorf, H Hajishirzi
Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2017
Multi-task learning for speaker-role adaptation in neural conversation models
Y Luan, C Brockett, B Dolan, J Gao, M Galley
Proc. Joint Conference on Natural Language Processing (IJCNLP), 2017
LSTM based conversation models
Y Luan, Y Ji, M Ostendorf
Proc. Int. Workshop on Conversational Natural Language Processing (ConvNLP …, 2016
ASQA: Factoid Questions Meet Long-Form Answers
I Stelmakh, Y Luan, B Dhingra, MW Chang
arXiv preprint arXiv:2204.06092, 2022
PaperRobot: Incremental draft generation of scientific ideas
Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan
Proc. Annu. Meeting Assoc. for Computational Linguistics (ACL), 2019
Method for using a multi-scale recurrent neural network with pretraining for spoken language understanding tasks
S Watanabe, Y Luan, B Harsham
US Patent 9,607,616, 2017
CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning
Z Wu, Y Luan, H Rashkin, D Reitter, GS Tomar
arXiv preprint arXiv:2112.08558, 2021
Instruction-following evaluation for large language models
J Zhou, T Lu, S Mishra, S Brahma, S Basu, Y Luan, D Zhou, L Hou
arXiv preprint arXiv:2311.07911, 2023
Contextualized Representations Using Textual Encyclopedic Knowledge
M Joshi, K Lee, Y Luan, K Toutanova
arXiv preprint arXiv:2004.12006, 2020
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?
Y Chen, H Hu, Y Luan, H Sun, S Changpinyo, A Ritter, MW Chang
arXiv preprint arXiv:2302.11713, 2023
Efficient learning for spoken language understanding tasks with word embedding based pre-training
Y Luan, S Watanabe, B Harsham
Sixteenth Annual Conference of the International Speech Communication …, 2015
Recognition of stance strength and polarity in spontaneous speech
GA Levow, V Freeman, A Hrynkevich, M Ostendorf, R Wright, J Chan, ...
2014 IEEE Spoken Language Technology Workshop (SLT), 236-241, 2014