Marie-Anne Lachaux
Mistral AI
Verified email at mistral.ai
Title · Cited by · Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 5489 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 4085 · 2023
Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring
S Humeau, K Shuster, MA Lachaux, J Weston
arXiv preprint arXiv:1905.01969, 2019
Cited by 519 · 2019
CCNet: Extracting high quality monolingual datasets from web crawl data
G Wenzek, MA Lachaux, A Conneau, V Chaudhary, F Guzmán, A Joulin, ...
arXiv preprint arXiv:1911.00359, 2019
Cited by 483 · 2019
Unsupervised translation of programming languages
MA Lachaux, B Roziere, L Chanussot, G Lample
arXiv preprint arXiv:2006.03511, 2020
Cited by 328* · 2020
Mistral 7B
AQ Jiang, A Sablayrolles, A Mensch, C Bamford, DS Chaplot, D Casas, ...
arXiv preprint arXiv:2310.06825, 2023
Cited by 151 · 2023
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
MA Lachaux, B Roziere, M Szafraniec, G Lample
Advances in Neural Information Processing Systems 34, 2021
Cited by 119* · 2021
Hypertree proof search for neural theorem proving
G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ...
Advances in Neural Information Processing Systems 35, 26337–26349, 2022
Cited by 67 · 2022
Mixtral of experts
AQ Jiang, A Sablayrolles, A Roux, A Mensch, B Savary, C Bamford, ...
arXiv preprint arXiv:2401.04088, 2024
Cited by 52 · 2024
Target conditioning for one-to-many generation
MA Lachaux, A Joulin, G Lample
arXiv preprint arXiv:2009.09758, 2020
Cited by 12 · 2020