Yongqiang Wang

Yongqiang Wang joined Facebook in January 2017 as a research scientist. He is currently a tech lead, driving the research and development of speech recognition technology for the Facebook community. Prior to joining Facebook, Yongqiang was a speech scientist at Microsoft, where he worked on improving the Cortana experience starting in January 2014. Yongqiang received his PhD from the University of Cambridge.

Yongqiang's Publications


Transformer-based Acoustic Modeling for Hybrid Speech Recognition

We propose and evaluate transformer-based acoustic models (AMs) for hybrid speech recognition. Several modeling choices are discussed in this work, including various positional embedding methods and an iterated loss to enable training deep transformers. We also present a preliminary study of using limited right context in transformer models, which…
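The "limited right context" idea mentioned above can be illustrated with an attention mask in which each frame attends to all past frames but only a fixed number of future frames. This is a minimal sketch of such a mask, not the paper's implementation; the function name and the choice of `right_context` are illustrative.

```python
def limited_right_context_mask(num_frames, right_context):
    """Build a boolean attention mask for a sequence of num_frames frames.

    mask[i][j] is True when frame i is allowed to attend to frame j:
    all past frames, plus at most `right_context` future frames.
    """
    return [
        [j <= i + right_context for j in range(num_frames)]
        for i in range(num_frames)
    ]

mask = limited_right_context_mask(5, right_context=1)
# Frame 0 may attend to frames 0 and 1 but not frame 2.
```

Restricting right context in this way bounds the look-ahead, and hence the latency, that the acoustic model needs before emitting a score for a frame.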

Yongqiang Wang, Abdelrahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer


Joint Grapheme and Phoneme Embeddings for Contextual End-to-End ASR

End-to-end approaches to automatic speech recognition, such as Listen-Attend-Spell (LAS), blend all components of a traditional speech recognizer into a unified model. Although this simplifies training and decoding pipelines, a unified model is hard to adapt when a mismatch exists between training and test data, especially if this information is…

Yongqiang Wang, Zhehuai Chen, Mahaveer Jain, Michael L. Seltzer, Christian Fuegen


Towards End-to-End Spoken Language Understanding

A spoken language understanding system is traditionally designed as a pipeline of components. First, the audio signal is processed by an automatic speech recognizer to produce a transcription or n-best hypotheses. From the recognition results, a natural language understanding system maps the text to structured data such as domain, intent and slot…
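The traditional pipeline described above can be sketched as two chained stages. The component functions below are hypothetical stand-ins (not the paper's models): a placeholder ASR stage and a toy NLU stage, shown only to make the ASR-then-NLU data flow concrete.

```python
def asr(audio):
    """Hypothetical ASR stage: audio -> best transcription.

    A real recognizer would decode the audio signal; this stand-in
    returns a fixed transcription for illustration.
    """
    return "play some jazz music"

def nlu(text):
    """Hypothetical NLU stage: text -> structured domain/intent/slots."""
    tokens = text.split()
    return {
        "domain": "music",
        "intent": "play_music",
        "slots": {"genre": tokens[2]} if len(tokens) > 2 else {},
    }

def slu_pipeline(audio):
    # Traditional design: the NLU stage consumes the ASR output,
    # so recognition errors propagate downstream.
    return nlu(asr(audio))

result = slu_pipeline(b"\x00\x01")  # placeholder audio bytes
```

An end-to-end system, by contrast, maps audio to the structured output directly, avoiding the error propagation between the two stages.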

Yongqiang Wang, Dmitriy Serdyuk, Christian Fuegen, Anuj Kumar, Baiyang Liu, Yoshua Bengio