Are vision-language transformers learning multimodal representations? A probing perspective E Salin, B Farah, S Ayache, B Favre Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11248 …, 2022 | 30 | 2022 |
Do vision-and-language transformers learn grounded predicate-noun dependencies? M Nikolaus, E Salin, S Ayache, A Fourtassi, B Favre arXiv preprint arXiv:2210.12079, 2022 | 10 | 2022 |
Study of the multimodal spatial understanding of vision-language transformer models E Salin Journées Jointes des Groupements de Recherche Linguistique Informatique …, 2022 | 3 | 2022 |
Masked ELMo: An evolution of ELMo towards fully contextual RNN language models G Senay, E Salin arXiv preprint arXiv:2010.04302, 2020 | 1 | 2020 |
Study of the multimodal understanding of vision-language transformer models E Salin | | 2024 |
Study of the multimodal understanding of vision-language transformer models E Salin Aix Marseille université, 2023 | | 2023 |
An overview of Vision-Language Transformers: shedding light on pre-training data E Salin 18e Conférence en Recherche d'Information et Applications / 16e Rencontres …, 2023 | | 2023 |
Towards an Exhaustive Evaluation of Vision-Language Foundation Models E Salin, S Ayache, B Favre Proceedings of the IEEE/CVF International Conference on Computer Vision, 339-352, 2023 | | 2023 |
Towards a Better Understanding of Vision-Language Transformer Models E Salin | | |