Fahim Dalvi
Qatar Computing Research Institute
Verified email at hbku.edu.qa - Homepage
Title
Cited by
Year
What do Neural Machine Translation Models Learn about Morphology?
Y Belinkov, N Durrani, F Dalvi, H Sajjad, J Glass
arXiv preprint arXiv:1704.03471, 2017
464 · 2017
Fighting the COVID-19 infodemic: modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society
F Alam, S Shaar, F Dalvi, H Sajjad, A Nikolov, H Mubarak, GDS Martino, ...
arXiv preprint arXiv:2005.00033, 2020
284* · 2020
Identifying and Controlling Important Neurons in Neural Machine Translation
A Bau, Y Belinkov, H Sajjad, N Durrani, F Dalvi, J Glass
arXiv preprint arXiv:1811.01157, 2018
204 · 2018
What is one grain of sand in the desert? Analyzing individual neurons in deep NLP models
F Dalvi, N Durrani, H Sajjad, Y Belinkov, A Bau, J Glass
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 6309-6317, 2019
201 · 2019
Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks
Y Belinkov, L Màrquez, H Sajjad, N Durrani, F Dalvi, J Glass
Proceedings of the Eighth International Joint Conference on Natural Language …, 2017
133 · 2017
Findings of the IWSLT 2020 Evaluation Campaign
E Ansari, A Axelrod, N Bach, O Bojar, R Cattoni, F Dalvi, N Durrani, ...
Proceedings of the 17th International Conference on Spoken Language …, 2020
132 · 2020
On the effect of dropping layers of pre-trained transformer models
H Sajjad, F Dalvi, N Durrani, P Nakov
Computer Speech & Language 77, 101429, 2023
130 · 2023
Poor man’s BERT: Smaller and faster transformer models
H Sajjad, F Dalvi, N Durrani, P Nakov
arXiv preprint arXiv:2004.03844 2 (2), 2020
113 · 2020
Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation
F Dalvi, N Durrani, H Sajjad, S Vogel
Proceedings of the 2018 Conference of the North American Chapter of the …, 2018
107 · 2018
Analyzing Individual Neurons in Pre-trained Language Models
N Durrani, H Sajjad, F Dalvi, Y Belinkov
arXiv preprint arXiv:2010.02695, 2020
102 · 2020
Analyzing redundancy in pretrained transformer models
F Dalvi, H Sajjad, N Durrani, Y Belinkov
arXiv preprint arXiv:2004.04010, 2020
99 · 2020
Similarity Analysis of Contextual Word Representation Models
JM Wu, Y Belinkov, H Sajjad, N Durrani, F Dalvi, J Glass
arXiv preprint arXiv:2005.01172, 2020
84 · 2020
On the Linguistic Representational Power of Neural Machine Translation Models
Y Belinkov, N Durrani, F Dalvi, H Sajjad, J Glass
Computational Linguistics 46 (1), 1-52, 2020
82 · 2020
Neuron-level interpretation of deep NLP models: A survey
H Sajjad, N Durrani, F Dalvi
Transactions of the Association for Computational Linguistics 10, 1285-1303, 2022
78 · 2022
Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder
F Dalvi, N Durrani, H Sajjad, Y Belinkov, S Vogel
Proceedings of the Eighth International Joint Conference on Natural Language …, 2017
76 · 2017
Discovering Latent Concepts Learned in BERT
F Dalvi, AR Khan, F Alam, N Durrani, J Xu, H Sajjad
International Conference on Learning Representations, 2021
69 · 2021
NeuroX: A toolkit for analyzing individual neurons in neural networks
F Dalvi, A Nortonsmith, A Bau, Y Belinkov, H Sajjad, N Durrani, J Glass
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 9851-9852, 2019
66 · 2019
How transfer learning impacts linguistic knowledge in deep NLP models?
N Durrani, H Sajjad, F Dalvi
arXiv preprint arXiv:2105.15179, 2021
56 · 2021
Neural Machine Translation Training in a Multi-Domain Scenario
H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel
arXiv preprint arXiv:1708.08712, 2017
55 · 2017
One Size Does Not Fit All: Comparing NMT Representations of Different Granularities
N Durrani, F Dalvi, H Sajjad, Y Belinkov, P Nakov
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
53 · 2019
Articles 1–20