The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process. H. Mei, J. Eisner. arXiv, 2016. Cited by 733.
What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment. H. Mei, M. Bansal, M. R. Walter. NAACL, 2016. Cited by 332.
Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. H. Mei, M. Bansal, M. R. Walter. AAAI, 2016. Cited by 284.
Coherent Dialogue with Attention-based Language Models. H. Mei, M. Bansal, M. R. Walter. AAAI, 2017. Cited by 118.
Imputing missing events in continuous-time event streams. H. Mei, G. Qin, J. Eisner. International Conference on Machine Learning, 4475-4485, 2019. Cited by 55.
Transformer embeddings of irregularly spaced events and their participants. C. Yang, H. Mei, J. Eisner. arXiv preprint arXiv:2201.00044, 2021. Cited by 40.
Language models can improve event prediction by few-shot abductive reasoning. X. Shi, S. Xue, K. Wang, F. Zhou, J. Zhang, J. Zhou, C. Tan, H. Mei. Advances in Neural Information Processing Systems 36, 2024. Cited by 35.
EasyTPP: Towards open benchmarking temporal point processes. S. Xue, X. Shi, Z. Chu, Y. Wang, F. Zhou, H. Hao, C. Jiang, C. Pan, Y. Xu, et al. arXiv preprint arXiv:2307.08097, 2023. Cited by 32.
Statler: State-maintaining language models for embodied reasoning. T. Yoneda, J. Fang, P. Li, H. Zhang, T. Jiang, S. Lin, B. Picker, D. Yunis, H. Mei, et al. 2024 IEEE International Conference on Robotics and Automation (ICRA), 15083-…, 2024. Cited by 30.
HYPRO: A hybridly normalized probabilistic model for long-horizon prediction of event sequences. S. Xue, X. Shi, J. Zhang, H. Mei. Advances in Neural Information Processing Systems 35, 34641-34650, 2022. Cited by 30.
Can large language models play text games well? Current state-of-the-art and open questions. C. F. Tsai, X. Zhou, S. S. Liu, J. Li, M. Yu, H. Mei. arXiv preprint arXiv:2304.02868, 2023. Cited by 28.
Noise-contrastive estimation for multivariate point processes. H. Mei, T. Wan, J. Eisner. Advances in Neural Information Processing Systems 33, 5204-5214, 2020. Cited by 24.
Neural Datalog through time: Informed temporal modeling via logical specification. H. Mei, G. Qin, M. Xu, J. Eisner. International Conference on Machine Learning, 6808-6819, 2020. Cited by 21.
Hidden state variability of pretrained language models can guide computation reduction for transfer learning. S. Xie, J. Qiu, A. Pasad, L. Du, Q. Qu, H. Mei. arXiv preprint arXiv:2210.10041, 2022. Cited by 20.
Robustness of learning from task instructions. J. Gu, H. Zhao, H. Xu, L. Nie, H. Mei, W. Yin. arXiv preprint arXiv:2212.03813, 2022. Cited by 19.
Personalized dynamic treatment regimes in continuous time: a Bayesian approach for optimizing clinical decisions with timing. W. Hua, H. Mei, S. Zohar, M. Giral, Y. Xu. Bayesian Analysis 17 (3), 849-878, 2022. Cited by 19.
Accurate Vision-based Vehicle Localization using Satellite Imagery. H. Chu, H. Mei, M. Bansal, M. R. Walter. NIPS 2015 Transfer and Multi-Task Learning Workshop, 2015. Cited by 16.
Tiny-attention adapter: Contexts are more important than the number of parameters. H. Zhao, H. Tan, H. Mei. arXiv preprint arXiv:2211.01979, 2022. Cited by 15.
Bellman meets Hawkes: Model-based reinforcement learning via temporal point processes. C. Qu, X. Tan, S. Xue, X. Shi, J. Zhang, H. Mei. Proceedings of the AAAI Conference on Artificial Intelligence 37 (8), 9543-9551, 2023. Cited by 14.
Explicit planning helps language models in logical reasoning. H. Zhao, K. Wang, M. Yu, H. Mei. arXiv preprint arXiv:2303.15714, 2023. Cited by 13.