February 24, 2023
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
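The released checkpoints are intended to be run locally by researchers. As a minimal sketch only, assuming the weights have already been converted to the Hugging Face format (the conversion step and the local path below are placeholders, not part of the official release, which distributes weights through a research request form), loading a model and generating text could look like this:

# Illustrative sketch, not the official release workflow.
# "./llama-7b-hf" is a hypothetical local path to converted weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
    device_map="auto",          # requires accelerate; places layers on available devices
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)  # greedy decoding for simplicity
print(tokenizer.decode(output[0], skip_special_tokens=True))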
Written by
Faisal Azhar
Hugo Touvron
Armand Joulin
Aurelien Rodriguez
Baptiste Rozière
Eric Hambro
Gautier Izacard
Guillaume Lample
Marie-Anne Lachaux
Naman Goyal
Thibaut Lavril
Timothee Lacroix
Xavier Martinet
Edouard Grave
Publisher
arXiv
Research Topics