SentiBERT: A Transferable Transformer-Based Architecture for Compositional Sentiment Semantics. D Yin, T Meng, KW Chang. ACL, 2020. Cited by 163.

On the Paradox of Learning to Reason from Data. H Zhang, LH Li, T Meng, KW Chang, GV Broeck. IJCAI, 2022. Cited by 105.

GEMNET: Effective Gated Gazetteer Representations for Recognizing Complex Entities in Low-context Input. T Meng, A Fang, O Rokhlenko, S Malmasi. NAACL, 2021. Cited by 70.

Mitigating Gender Bias Amplification in Distribution by Posterior Regularization. S Jia, T Meng, J Zhao, KW Chang. ACL, 2020. Cited by 49.

On the Robustness of Language Encoders against Grammatical Errors. F Yin, Q Long, T Meng, KW Chang. ACL, 2020. Cited by 33.

Controllable Text Generation with Neurally-Decomposed Oracle. T Meng, S Lu, N Peng, KW Chang. NeurIPS, 2022. Cited by 31.

Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing. T Meng, N Peng, KW Chang. EMNLP, 2019. Cited by 30.

InsNet: An Efficient, Flexible, and Performant Insertion-Based Text Generation Model. S Lu, T Meng, N Peng. NeurIPS, 2022. Cited by 16*.

An Integer Linear Programming Framework for Mining Constraints from Data. T Meng, KW Chang. ICML, 2021. Cited by 11.

Monotonic Paraphrasing Improves Generalization of Language Model Prompting. Q Liu, F Wang, N Xu, T Yan, T Meng, M Chen. arXiv preprint arXiv:2403.16038, 2024. Cited by 5.

Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification. T Meng, N Mehrabi, P Goyal, A Ramakrishna, A Galstyan, R Zemel, et al. arXiv preprint arXiv:2410.05559, 2024.

Control Large Language Models via Divide and Conquer. B Li, Y Wang, T Meng, KW Chang, N Peng. arXiv preprint arXiv:2410.04628, 2024.

Constrained Inference and Decoding for Controlling Natural Language Processing Models. T Meng. PhD dissertation, UCLA, 2024.