Davis Liang
Research Scientist, Abridge AI (Prev: Meta AI, Amazon AI)
Masked Language Model Scoring
J Salazar, D Liang, TQ Nguyen, K Kirchhoff
ACL 2020, 2019
Improve Transformer Models with Better Relative Position Embeddings
Z Huang, D Liang, P Xu, B Xiang
Findings of EMNLP 2020, 2020
Learning noise-invariant representations for robust speech recognition
D Liang, Z Huang, ZC Lipton
IEEE SLT 2018, 2018
Embedding-based zero-shot retrieval through query generation
D Liang, P Xu, S Shakeri, CN Santos, R Nallapati, Z Huang, B Xiang
arXiv preprint arXiv:2009.10270, 2020
TRANS-BLSTM: Transformer with bidirectional LSTM for language understanding
Z Huang, P Xu, D Liang, A Mishra, B Xiang
arXiv preprint arXiv:2003.07000, 2020
Decoding and diversity in machine translation
N Roberts, D Liang, G Neubig, ZC Lipton
arXiv preprint arXiv:2011.13477, 2020
XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
D Liang, H Gonen, Y Mao, R Hou, N Goyal, M Ghazvininejad, ...
arXiv preprint arXiv:2301.10472, 2023
Deep Automated Multi-task Learning
D Liang, Y Shu
IJCNLP 2017, 2017
Attention-guided generative models for extractive question answering
P Xu, D Liang, Z Huang, B Xiang
arXiv preprint arXiv:2110.06393, 2021
Invariant representation learning for robust deep networks
J Salazar, D Liang, Z Huang, Z Lipton
Adaptable Claim Rewriting with Offline Reinforcement Learning for Effective Misinformation Discovery
A Kazemi, A Abzaliev, N Deng, R Hou, D Liang, SA Hale, V Pérez-Rosas, ...
arXiv preprint arXiv:2210.07467, 2022
Automated Multi-task Learning
D Liang
University of California, San Diego, 2017
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants
L Bandarkar, D Liang, B Muller, M Artetxe, SN Shukla, D Husa, N Goyal, ...
arXiv preprint arXiv:2308.16884, 2023
Generating Hashtags for Short-form Videos with Guided Signals
T Yu, H Yu, D Liang, Y Mao, S Nie, PY Huang, M Khabsa, P Fung, ...
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models
H Lee, R Hou, J Kim, D Liang, SJ Hwang, A Min
arXiv preprint arXiv:2305.18239, 2023
Robustifying Language Models via Adversarial Training with Masked Gradient
J Kim, Y Mao, R Hou, H Yu, D Liang, P Fung, Q Wang, M Khabsa
Multiplicative Position-aware Transformer Models for Language Understanding
Z Huang, D Liang, P Xu, B Xiang
arXiv preprint arXiv:2109.12788, 2021