NLP

Luna: Linear Unified Nested Attention

October 26, 2021

Abstract

The quadratic computational and memory complexities of the Transformer's attention mechanism have limited its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Compared to a more traditional attention mechanism, Luna introduces an additional sequence with a fixed length as input and an additional corresponding output, which allows Luna to perform attention operations linearly while also storing adequate contextual information. We perform extensive evaluations on three benchmarks of sequence modeling tasks: long-context sequence modeling, neural machine translation, and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baseline methods, including full-rank attention and other efficient sparse and dense attention methods. The implementation of our model is available at https://github.com/XuezheMax/fairseq-apollo
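
To make the nested construction concrete, the sketch below illustrates the pack-and-unpack pattern described in the abstract under simplifying assumptions: a single attention head, no learned projection matrices, and plain softmax attention for both steps (the paper additionally uses multi-head attention, projections, and a different pack activation in its causal variant). The names attention and luna_attention and the chosen dimensions are illustrative, not taken from the released fairseq-apollo code.

```python
# Minimal sketch of Luna's two nested linear attention steps (illustrative,
# not the official implementation). Shapes: X is (n, d), P is (l, d) with
# fixed l, so each attention call costs O(n * l) rather than O(n^2).
import torch
import torch.nn.functional as F

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Standard scaled dot-product attention."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

def luna_attention(x: torch.Tensor, p: torch.Tensor):
    """x: (n, d) input sequence; p: (l, d) extra sequence with fixed length l."""
    # Pack: the fixed-length sequence P attends over X, compressing the
    # full context into l vectors at O(l * n) cost.
    p_packed = attention(p, x, x)
    # Unpack: X attends over the packed context at O(n * l) cost.
    y = attention(x, p_packed, p_packed)
    # P' is the additional fixed-length output carried to the next layer.
    return y, p_packed

n, l, d = 1024, 16, 64
x = torch.randn(n, d)   # input sequence of length n
p = torch.randn(l, d)   # extra sequence of fixed length l (learned in practice)
y, p_new = luna_attention(x, p)
print(y.shape, p_new.shape)  # torch.Size([1024, 64]) torch.Size([16, 64])
```

Because the extra sequence length l is a fixed constant, both attention calls are linear in the input length n, and the packed sequence P' doubles as the additional output that stores contextual information between layers.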


AUTHORS

Xuezhe Ma

Xiang Kong

Sinong Wang

Chunting Zhou

Jonathan May

Hao Ma

Luke Zettlemoyer

Publisher

NeurIPS

