Q&A with Facebook AI residents Tatiana Likhomanenko and Siddharth Karamcheti

March 15, 2019

Facebook’s Artificial Intelligence (AI) Residency Program is a full-time, one-year research training opportunity designed to offer hands-on experience with machine learning research inside Facebook AI. The program pairs Residents with Facebook researchers and engineers to collaborate on mutually interesting research problems and to devise new deep learning techniques for solving them, which can often address real-world challenges.

We recently caught up with two of the program’s current Residents, Tatiana Likhomanenko and Siddharth Karamcheti, to learn a little more about them.

Tatiana joined Facebook as an AI Resident in Menlo Park, California in September 2018, working on the Facebook AI Research (FAIR) speech team with Ronan Collobert and Gabriel Synnaeve. She received her master’s degree in computer science from Lomonosov Moscow State University (MSU) and graduated from the Yandex School of Data Analysis. For four years she worked on applications of machine learning to high energy physics as a researcher in the joint scientific lab between Yandex, a Russian search-engine company, and CERN. In 2017, she defended her PhD on mixed-type partial differential equations in the computer science department at MSU.

Siddharth did his undergrad at Brown University in Rhode Island, where he was a double concentrator in computer science and literary arts (creative writing – fiction). “With a love for both CS and language/literature, I spent my first few years of college looking for a way to fuse the two, and natural language processing (NLP) was my in,” he explains. Siddharth is an AI Resident in New York City, collaborating with Rob Fergus, Jason Weston, Dhruv Batra, Douwe Kiela and Arthur Szlam.

Here’s our Q&A with the two of them.

Q: What are your current research interests?

Tatiana: My research interests are speech synthesis, video recognition, functional analysis, high energy physics and astrophysics. For my PhD I studied a model equation for subsonic and supersonic processes described by mixed-type PDEs, constructing the eigenfunctions of such problems in closed form. The main results of that work lie in functional analysis, e.g. the Riesz basis properties of the eigenfunctions constructed in closed form.
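
To give a sense of what a mixed-type PDE looks like (an illustrative textbook example, not necessarily the exact equation from her thesis), the classical Tricomi equation changes type across the line y = 0: it is elliptic in the subsonic region and hyperbolic in the supersonic one.

```latex
% Tricomi equation: a canonical mixed-type model for transonic flow.
% Elliptic for y > 0 (subsonic regime), hyperbolic for y < 0 (supersonic regime).
y \, u_{xx} + u_{yy} = 0
```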

Since childhood I’ve loved math and physics, and I studied them passionately to understand quantum mechanics and differential geometry and to get an idea of how the universe works. Participating in research at CERN helped me understand how we can study our universe.

Siddharth: My goal is to build agents that can communicate effectively with humans and act safely in different environments. I’m also interested in the role language has in ensuring safe and interpretable behavior. When used in instructional settings, language provides meaningful constraints on action, in some cases specifying a concrete goal to achieve, and in others dictating exactly how a task should be completed. I’m extremely interested in ways we can reason about the mapping from language to behavior in order to guarantee or certify that an agent is actually behaving according to a language instruction (as opposed to following only certain parts of it, or ignoring it altogether).

After this year as a Resident, I’ll be starting my PhD in Computer Science at Stanford, where I hope to continue doing work in machine learning and natural language processing.

Q: What are you currently working on?

Tatiana: For my AI Residency I’ve had the chance to join research in an area that is new to me: automatic speech recognition (ASR). Most state-of-the-art ASR systems today rely on a large, fixed lexicon and are limited to recognizing only words from that vocabulary. Building an ASR system that can recognize words outside of its lexicon is currently my main research focus.
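
As a rough illustration of why a fixed lexicon is limiting, here is a minimal sketch (in Python, with made-up frame outputs, not any actual FAIR system) of greedy character-level CTC decoding, which can spell out a word, such as a rare surname, even though it appears in no word lexicon:

```python
# Minimal illustration: a word-lexicon decoder can only output entries from its
# vocabulary, while a character-level decoder can spell words it has never seen.

BLANK = "_"  # CTC blank symbol (an assumed convention for this sketch)

def greedy_ctc_decode(frame_labels):
    """Collapse repeated characters and drop blanks (the standard greedy CTC rule)."""
    out, prev = [], None
    for ch in frame_labels:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)

# Hypothetical per-frame argmax characters for an out-of-vocabulary surname.
frames = list("ll_ii_kk_hh_oo_mm_aa_nn_ee_nn_kk_oo")
hypothesis = greedy_ctc_decode(frames)    # -> "likhomanenko"

lexicon = {"the", "cat", "sat"}           # toy closed vocabulary
print(hypothesis, hypothesis in lexicon)  # the word is recovered even though it
                                          # is absent from the lexicon
```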

Another focus of mine is improving ASR systems with unsupervised learning in settings where plenty of text and acoustic data are available but transcribed audio is limited. Learning such representations could be very useful for speech recognition in low-resource languages.

Siddharth: My primary project is focused on using language to aid efficient generalization in the context of reinforcement learning.

To motivate this, consider a household robot tasked with helping out with everyday chores – tasks like making coffee or tea in the morning, cleaning, and taking out the trash. While these chores can vary greatly, each task can be broken up into simple components (like opening a cupboard, or turning on a switch), and the same components appear across many different tasks. Besides helping to simplify behavior, these bite-sized components can also be combined in new ways to perform brand-new tasks. This allows for efficient cross-task generalization: instead of teaching a robot how to perform a new task from scratch (which can take a significant amount of time), we need only teach it how to put together the components it has already learned (an undertaking that is considerably more efficient).

While our work is still in progress, the key idea is to learn how to break tasks down into these components by looking at how teachers use language to teach students new skills. Our preliminary investigation shows that in such settings, teachers break high-level tasks down into bite-sized language instructions that roughly correspond to the simple components to be completed. Our hope is that by learning how to decompose a high-level task into the requisite language instructions, we can allow agents to pick up new behaviors in a way that is both efficient and robust.
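
As a loose illustration of the setup (all skill names and the instruction-to-skill table below are hypothetical stand-ins for learned components, not the project’s actual code), a new high-level task can be carried out by mapping each bite-sized instruction onto an already-learned primitive skill and running the skills in sequence:

```python
# Hypothetical sketch: compose previously learned primitive skills to carry out
# a new task described as a sequence of short language instructions.

def open_cupboard(state):   return state | {"cupboard_open"}
def grab_mug(state):        return state | {"holding_mug"}
def turn_on_switch(state):  return state | {"machine_on"}

# A toy grounding table standing in for a learned instruction-to-skill mapping.
SKILLS = {
    "open the cupboard": open_cupboard,
    "grab a mug": grab_mug,
    "turn on the coffee machine": turn_on_switch,
}

def execute(task_instructions, state=frozenset()):
    """Run each instruction's primitive skill in order, threading the state through."""
    state = set(state)
    for instruction in task_instructions:
        skill = SKILLS[instruction]  # in the real setting, a learned mapping
        state = skill(state)
    return state

# A "new" task is just a new composition of components the agent already knows.
make_coffee = ["open the cupboard", "grab a mug", "turn on the coffee machine"]
print(execute(make_coffee))  # {'cupboard_open', 'holding_mug', 'machine_on'}
```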

Q: Why Facebook’s program?

Tatiana: I chose to do a residency at Facebook because the values of openness and sharing new ideas, a core part of Facebook culture, are great for productive research. I enjoy being a part of the AI Residency Program at FAIR because the research done here is not only interesting but also immediately applicable to real life. For example, research in speech recognition and machine translation would mean being able to transcribe audio and video accurately in any language. Breaking down language barriers and improving accessibility would let us share our knowledge, dreams and experiences with people with hearing disabilities and with speakers of other languages across the world.

Siddharth: The people. There are so many extremely talented researchers working on the problems I care about – problems in grounding, language-based interaction, reinforcement learning, emergent communication… the list goes on. Furthermore, I felt the researchers I’d be working with would teach me many new skills and expose me to an extremely wide space of problems that people are working on – something I really wanted before I focus on a single area during my PhD.

To learn more about how to become an AI Resident at Facebook, visit the AI Residency Program page.