September 20, 2019
Facebook AI is releasing code for a self-supervised technique that uses AI-generated questions to train NLP systems, avoiding the need for labeled question answering training data.
October 18, 2019
Facebook AI is developing alternative ways to train our AI systems so that we can do more with less labeled training data overall. Learn how our “semi-weak supervision” method is delivering state-of-the-art performance for highly efficient, production-ready models.
October 03, 2019
We are releasing a new benchmark and dataset for comparing neural code search techniques, making it easier to evaluate a new model on a common set of questions.
September 13, 2019
Facebook is at Interspeech 2019! For those attending the conference in Graz, Austria this week, be sure to stop by booth F7 to connect with recruiters, researchers, and software engineers about speech research at Facebook. Learn more about Facebook Research at Interspeech in our blog.
Facebook AI is open-sourcing Hydra, a new framework whose dynamic approach to configuration will accelerate the development of complex Python applications.
September 06, 2019
Facebook AI has open-sourced MiniRTSv2, a real-time strategy game designed to test and evaluate a range of AI techniques related to reinforcement learning, hierarchical decision-making, and natural language processing.
September 19, 2019
Facebook AI is releasing code for wav2vec, a self-supervised algorithm that uses raw audio to improve automatic speech recognition models, outperforming traditional systems that rely solely on transcribed audio.
October 29, 2019
Facebook AI Research Scientist Alex Berg is one of the recipients of the 2019 Helmholtz Prize for fundamental contributions in computer vision.
November 25, 2019
The lottery ticket hypothesis suggests that by training DNNs from “lucky” initializations, we can train networks that are 10-100x smaller with minimal loss in performance. In new work, we extend our understanding of this phenomenon in several ways.
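The prune-and-rewind procedure behind the hypothesis can be sketched in a few lines of plain Python. This is a toy illustration under assumptions, not Facebook AI's implementation: the "training" step is simulated with random noise, and the 20% keep ratio is an arbitrary example.

```python
import random

random.seed(0)
n = 64
# Saved initialization of a (flattened) toy weight tensor.
init = [random.gauss(0, 1) for _ in range(n)]
# Stand-in for training: in practice this would be SGD on a real task.
trained = [w + random.gauss(0, 0.5) for w in init]

# Magnitude pruning: keep only the 13 largest-magnitude trained weights (~20%).
keep = 13
threshold = sorted((abs(w) for w in trained), reverse=True)[keep - 1]
mask = [1 if abs(w) >= threshold else 0 for w in trained]

# Rewind: the "winning ticket" is the surviving sparse subnetwork reset to
# its original initialization, which is then retrained from scratch.
ticket = [w * m for w, m in zip(init, mask)]
```

In the iterative variant, this prune-retrain-rewind cycle is repeated, removing a fraction of the remaining weights each round until the target sparsity is reached.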
November 07, 2019
Facebook AI is sharing MLQA, an extractive question answering (QA) evaluation benchmark aligned across Arabic, German, Hindi, Spanish, Vietnamese, and Simplified Chinese. It will help the AI community improve and extend QA in more languages.
Facebook AI is open-sourcing XLM-R, a multilingual model that uses self-supervised training to achieve state-of-the-art performance on four cross-lingual understanding benchmarks.
November 18, 2019
At Facebook, we want to ensure that diverse perspectives are shaping the future of AI. We sat down with four leaders in the Women in AI community to learn more about their inclusion efforts, the challenges they’ve encountered, and what motivates them.