Scalable Zero-shot Entity Linking with Dense Entity Retrieval

November 16, 2020
This paper introduces a conceptually simple, scalable, and highly effective BERT-based entity linking model, along with an extensive evaluation of its accuracy-speed trade-off. We present a two-stage zero-shot linking algorithm, where each entity is defined only by a short textual description. The first stage does retrieval in a dense space defined by a bi-encoder that independently embeds the mention context and the entity descriptions. Each candidate is then re-ranked with a cross-encoder that concatenates the mention and entity text. Experiments demonstrate that this approach is state of the art on recent zero-shot benchmarks (6 point absolute gains) and also on more established non-zero-shot evaluations (e.g. TACKBP-2010), despite its relative simplicity (e.g. no explicit entity embeddings or manually engineered mention tables). We also show that bi-encoder linking is very fast with nearest neighbour search (e.g. linking with 5.9 million candidates in 2 milliseconds), and that much of the accuracy gain from the more expensive cross-encoder can be transferred to the bi-encoder via knowledge distillation. Our code and models are available at https://github.com/facebookresearch/BLINK.
Publisher
EMNLP
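To make the two-stage pipeline from the abstract concrete, here is a minimal sketch of bi-encoder retrieval followed by cross-encoder re-ranking. It assumes Hugging Face `transformers` and `faiss` as stand-ins, with untrained `bert-base-uncased` checkpoints in place of the trained BLINK encoders; the toy entity list, model names, and wiring are illustrative assumptions, not the released implementation (see the repository above for that).

```python
# Sketch of the two-stage zero-shot linking pipeline: dense retrieval with a
# bi-encoder, then re-ranking with a cross-encoder. Untrained base models are
# stand-ins for the trained BLINK encoders.
import faiss
import torch
from transformers import AutoModel, AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bi_encoder = AutoModel.from_pretrained("bert-base-uncased")  # untrained stand-in

def embed(texts):
    # Embed mention contexts and entity descriptions *independently*;
    # the [CLS] vector serves as the dense representation.
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bi_encoder(**batch).last_hidden_state[:, 0].contiguous().numpy()

# Toy entity descriptions; the paper indexes 5.9M Wikipedia entities this way.
entities = [
    "Paris is the capital and most populous city of France.",
    "Paris Hilton is an American media personality and businesswoman.",
]
index = faiss.IndexFlatIP(768)  # exact maximum-inner-product search
index.add(embed(entities))

mention = "He booked a flight to Paris for the conference."
_, cand = index.search(embed([mention]), k=2)  # Stage 1: nearest-neighbour retrieval

# Stage 2: re-rank each retrieved candidate with a cross-encoder that reads
# the concatenated mention + entity text jointly (slower, more accurate).
cross_encoder = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # untrained stand-in scorer
pairs = tok([mention] * len(cand[0]),
            [entities[i] for i in cand[0]],
            padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    rerank_scores = cross_encoder(**pairs).logits.squeeze(-1)
print(entities[cand[0][rerank_scores.argmax().item()]])
```

Because the bi-encoder embeds mentions and entities independently, all entity vectors can be precomputed and indexed once, which is what makes millisecond-scale retrieval over millions of candidates possible; the cross-encoder is only applied to the short candidate list it returns.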