April 27, 2020
We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm quantizes the dense representations using either Gumbel-Softmax or online k-means clustering. Discretization enables the direct application of algorithms from the NLP community that require discrete inputs. Experiments show that BERT pre-training on the discretized audio achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.
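To make the quantization step concrete, here is a minimal PyTorch sketch of the two approaches the abstract names: a Gumbel-Softmax vector quantizer and an online k-means (nearest-codeword, straight-through) alternative. The class name, codebook size, temperature, and tensor shapes are illustrative assumptions, not the vq-wav2vec implementation; the actual model adds details such as multiple codebook groups and a temperature schedule.

```python
# Illustrative sketch only; names, shapes, and hyperparameters are assumptions,
# not the vq-wav2vec codebase.
import torch
import torch.nn.functional as F


class GumbelVectorQuantizer(torch.nn.Module):
    """Maps dense features to discrete codes via straight-through Gumbel-Softmax."""

    def __init__(self, dim: int, num_codes: int = 320, temperature: float = 2.0):
        super().__init__()
        self.codebook = torch.nn.Parameter(torch.randn(num_codes, dim))
        self.to_logits = torch.nn.Linear(dim, num_codes)
        self.temperature = temperature

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) dense features from the convolutional encoder.
        logits = self.to_logits(z)
        if self.training:
            # Differentiable one-hot sample (straight-through Gumbel-Softmax).
            one_hot = F.gumbel_softmax(logits, tau=self.temperature, hard=True)
        else:
            # At inference, pick the argmax code deterministically.
            one_hot = F.one_hot(logits.argmax(-1), logits.size(-1)).type_as(logits)
        codes = one_hot.argmax(-1)           # (batch, time) discrete indices
        quantized = one_hot @ self.codebook  # (batch, time, dim) quantized features
        return quantized, codes


def kmeans_quantize(z: torch.Tensor, codebook: torch.Tensor):
    """Online k-means style alternative: nearest codeword + straight-through gradient."""
    # Squared Euclidean distance from every frame to every codeword.
    dists = (z.unsqueeze(-2) - codebook).pow(2).sum(-1)  # (batch, time, num_codes)
    codes = dists.argmin(-1)                             # (batch, time) indices
    quantized = codebook[codes]                          # (batch, time, dim)
    # Straight-through estimator: the forward pass uses the codeword,
    # while the gradient flows back to z unchanged.
    return z + (quantized - z).detach(), codes


# Usage: quantize a batch of dense audio features.
vq = GumbelVectorQuantizer(dim=512)
z = torch.randn(4, 100, 512)         # e.g. 4 utterances, 100 frames of 512-dim features
quantized, codes = vq(z)             # codes can be fed to a BERT-style model
print(quantized.shape, codes.shape)  # torch.Size([4, 100, 512]) torch.Size([4, 100])
```

The discrete `codes` are what make the NLP-style pipeline possible: they play the role of token IDs, so a standard BERT masked-prediction model can be trained on them directly.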
August 01, 2019
Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang
July 29, 2019
Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li
June 11, 2019
Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
June 10, 2019
Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato