TAO MENG
Ph.D. student in Computer Science, University of California, Los Angeles
Verified email at g.ucla.edu - Homepage
Title · Cited by · Year
SentiBERT: A Transferable Transformer-Based Architecture for Compositional Sentiment Semantics
D Yin, T Meng, KW Chang
ACL, 2020
163 · 2020
On the paradox of learning to reason from data
H Zhang, LH Li, T Meng, KW Chang, GV Broeck
IJCAI, 2022
105 · 2022
GEMNET: Effective Gated Gazetteer Representations for Recognizing Complex Entities in Low-context Input
T Meng, A Fang, O Rokhlenko, S Malmasi
NAACL, 2021
70 · 2021
Mitigating Gender Bias Amplification in Distribution by Posterior Regularization
S Jia, T Meng, J Zhao, KW Chang
ACL, 2020
49 · 2020
On the Robustness of Language Encoders against Grammatical Errors
F Yin, Q Long, T Meng, KW Chang
ACL, 2020
33 · 2020
Controllable Text Generation with Neurally-Decomposed Oracle
T Meng, S Lu, N Peng, KW Chang
NeurIPS, 2022
31 · 2022
Target language-aware constrained inference for cross-lingual dependency parsing
T Meng, N Peng, KW Chang
EMNLP, 2019
30 · 2019
InsNet: An efficient, flexible, and performant insertion-based text generation model
S Lu, T Meng, N Peng
NeurIPS, 2022
16* · 2022
An Integer Linear Programming Framework for Mining Constraints from Data
T Meng, KW Chang
ICML, 2021
11 · 2021
Monotonic paraphrasing improves generalization of language model prompting
Q Liu, F Wang, N Xu, T Yan, T Meng, M Chen
arXiv preprint arXiv:2403.16038, 2024
5 · 2024
Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification
T Meng, N Mehrabi, P Goyal, A Ramakrishna, A Galstyan, R Zemel, ...
arXiv preprint arXiv:2410.05559, 2024
— · 2024
Control Large Language Models via Divide and Conquer
B Li, Y Wang, T Meng, KW Chang, N Peng
arXiv preprint arXiv:2410.04628, 2024
— · 2024
Constrained Inference and Decoding for Controlling Natural Language Processing Models
T Meng
UCLA, 2024
— · 2024