QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization
Published in the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Yi-Ting Yeh and Yun-Nung Chen.
Full Paper: arXiv
Code: GitHub
Video: Vimeo
Standard accuracy metrics indicate that modern reading comprehension systems have achieved strong performance on many question answering datasets.
However, the extent to which these systems truly understand language remains unclear, and existing systems struggle to distinguish distractor sentences that look related to the question but do not actually answer it.
To address this problem, we propose QAInfomax as a regularizer in reading comprehension systems by maximizing mutual information among passages, a question, and its answer.
QAInfomax regularizes the model so that it does not simply learn superficial correlations for answering questions.
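For intuition, below is a minimal PyTorch sketch of how a mutual-information regularizer of this kind could be combined with a standard span-extraction QA loss. It uses a Deep InfoMax-style Jensen-Shannon estimator with a bilinear critic; the class names, shapes, and the weight `lambda_mi` are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: MI-maximization regularizer on top of a QA loss.
# Shapes, layer choices, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MIRegularizer(nn.Module):
    """Scores (answer summary, context token) pairs with a bilinear critic."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.critic = nn.Bilinear(hidden_dim, hidden_dim, 1)

    def forward(self, answer_repr, context_repr, shuffled_context_repr):
        # answer_repr:           (batch, hidden) summary of the predicted answer span
        # context_repr:          (batch, seq, hidden) passage/question states (positives)
        # shuffled_context_repr: states drawn from other examples (negatives)
        batch, seq, _ = context_repr.shape
        answer_exp = answer_repr.unsqueeze(1).expand(-1, seq, -1).reshape(batch * seq, -1)

        pos = self.critic(answer_exp, context_repr.reshape(batch * seq, -1))
        neg = self.critic(answer_exp, shuffled_context_repr.reshape(batch * seq, -1))

        # Jensen-Shannon MI lower bound: push positive scores up, negatives down.
        mi_estimate = (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
        return -mi_estimate  # return as a loss (negative of the MI estimate)


if __name__ == "__main__":
    # Illustrative usage with random tensors standing in for encoder outputs.
    hidden = 64
    reg = MIRegularizer(hidden)
    answer = torch.randn(8, hidden)
    context = torch.randn(8, 32, hidden)
    shuffled = context[torch.randperm(8)]   # negatives from other examples in the batch
    qa_loss = torch.tensor(1.0)             # stand-in for span cross-entropy loss
    lambda_mi = 0.1                         # assumed regularization weight
    total_loss = qa_loss + lambda_mi * reg(answer, context, shuffled)
    print(total_loss.item())
```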
The experiments show that our proposed QAInfomax achieves state-of-the-art performance on the benchmark Adversarial-SQuAD dataset.