November 2, 2019
The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galán-García et al., 2016) and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments, we show that this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context and cannot be treated as a single-sentence offensive language detection task, as in most previous work. Our newly collected tasks and methods are all made open source and publicly available.
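The build it, break it, fix it scheme described above is, at its core, an iterative loop of training, human adversarial probing, and retraining on the examples that broke the model. Below is a minimal, self-contained Python sketch of that loop; the classifier and the human "breakers" are simulated with toy stand-ins, and all function names (train_classifier, collect_adversarial_examples, build_break_fix) are hypothetical placeholders, not the authors' actual code or data.

```python
def train_classifier(examples):
    # Toy stand-in "model": memorizes offensive phrases seen in training data.
    offensive = {text for text, label in examples if label == "offensive"}
    return lambda text: "offensive" if text in offensive else "safe"

def collect_adversarial_examples(model, candidate_attacks):
    # "Break it": keep only the attacks the current model fails to flag,
    # standing in for human crowdworkers probing the deployed system.
    return [(text, "offensive") for text in candidate_attacks
            if model(text) == "safe"]

def build_break_fix(seed_data, attack_pool, rounds=3):
    data = list(seed_data)
    model = train_classifier(data)                      # build it
    for _ in range(rounds):
        broken = collect_adversarial_examples(model, attack_pool)  # break it
        if not broken:
            break
        data.extend(broken)                             # fix it: retrain on
        model = train_classifier(data)                  # the broken examples
    return model

# Usage: the toy model only flags an attack after it has been surfaced and
# folded back into training in an earlier round.
seed = [("you are stupid", "offensive"), ("have a nice day", "safe")]
attacks = ["you are really not clever, are you", "nobody would miss you"]
model = build_break_fix(seed, attacks, rounds=2)
print(model("nobody would miss you"))  # "offensive" after retraining
```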
August 1, 2019
Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang
July 29, 2019
Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li
June 11, 2019
Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
June 10, 2019
Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato