Learning to Distract: A Hierarchical Multi-Decoder Network for Automated Generation of Long Distractors for Multiple-Choice Questions for Reading Comprehension

Maurya, Kaushal Kumar and Desarkar, Maunendra Sankar (2020) Learning to Distract: A Hierarchical Multi-Decoder Network for Automated Generation of Long Distractors for Multiple-Choice Questions for Reading Comprehension. In: International Conference on Information and Knowledge Management, Proceedings, 19 October 2020 - 23 October 2020.

Full text not available from this repository.

Abstract

The task of generating incorrect options for multiple-choice questions is termed the distractor generation problem. The task demands high cognitive skill and is extremely challenging to automate. Existing neural approaches leverage encoder-decoder architectures to generate long distractors. However, two critical points are ignored in this process. First, many methods use Jaccard similarity over a pool of candidate distractors to sample the distractors, which often makes the generated distractors either too obvious or irrelevant to the question context. Second, some approaches do not consider the answer in the model, causing the generated distractors to be either answer-revealing or semantically equivalent to the answer. In this paper, we propose a novel Hierarchical Multi-Decoder Network (HMD-Net) consisting of one encoder and three decoders, where each decoder generates a single distractor. To overcome the first problem, we include multiple decoders with a dis-similarity term in the loss function. To address the second problem, we exploit richer interaction between the article, question, and answer with a SoftSel operation and a Gated Mechanism. This enables the generation of distractors that are in context with the question but not semantically equivalent to the answer. The proposed model significantly outperformed all previous approaches in both automatic and manual evaluations. In addition, we incorporate linguistic features and BERT contextual embeddings into the base model, which further improve performance.
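The dis-similarity loss described above can be illustrated with a minimal sketch: a penalty on the mean pairwise similarity of the three decoders' distractor representations. The function names, the cosine-based formulation, and the penalty weight here are illustrative assumptions, not the paper's exact loss.

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def dissimilarity_penalty(decoder_states, weight=1.0):
    """Mean pairwise cosine similarity among the decoders' distractor
    representations. Added to the generation loss, this term penalises
    decoders that produce near-identical distractors."""
    pairs = list(combinations(range(len(decoder_states)), 2))
    mean_sim = sum(cosine(decoder_states[i], decoder_states[j])
                   for i, j in pairs) / len(pairs)
    return weight * mean_sim

# Three orthogonal "distractor representations" incur no penalty;
# three identical ones incur the maximum penalty (= weight).
print(dissimilarity_penalty([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 0.0
print(dissimilarity_penalty([[1, 0], [1, 0], [1, 0]]))           # 1.0
```

In the paper's setup this penalty would be combined with each decoder's generation loss, encouraging the three decoders to cover distinct but plausible wrong answers.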

IITH Creators: Maurya, Kaushal Kumar (ORCiD: unspecified); Desarkar, Maunendra Sankar (ORCiD: unspecified)
Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Automated generation; Cognitive skill; Encoder-decoder architecture; Linguistic features; Loss functions; Model performance; Multiple choice questions; Reading comprehension
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 10 Aug 2021 05:52
Last Modified: 10 Aug 2021 05:52
URI: http://raiithold.iith.ac.in/id/eprint/8782
Publisher URL: http://doi.org/10.1145/3340531.3411997
