October 27, 2023
In the era of artificial intelligence, the role of large language models (LLMs) is becoming increasingly pivotal. Despite their widespread use, their ability to consolidate knowledge from different training documents, which is crucial for many applications, remains unexplored. This is the first study to investigate LLMs' capability to combine such information effectively within their parameter space. To this end, we introduce EpiK-Eval, a novel question-answering benchmark designed to assess LLMs' skill in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across multiple LLMs reveal significant deficiencies in this area. We argue that these shortcomings stem from the intrinsic nature of prevailing training objectives, and we consequently advocate for refining the approach to knowledge consolidation, as it has the potential to dramatically improve overall effectiveness and performance. The findings of this study offer insights for developing more robust and reliable LLMs.
Publisher
arXiv