Deep Learning for Reading Comprehension
In this talk, Ruslan will discuss deep learning models that learn semantically meaningful representations of words, read documents, and answer questions about their content. He will first introduce the Gated-Attention (GA) Reader, a model that combines a multi-hop architecture with a novel attention mechanism based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of document tokens for accurate answer selection. He will then show how external linguistic knowledge can be encoded as an explicit memory in recurrent neural networks and used to model coreference relations in text. Finally, he will present a two-step system for question answering over unstructured text, consisting of a retrieval step followed by a reading comprehension step. He will show that on several benchmark tasks these models significantly improve upon existing state-of-the-art techniques.
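To make the multiplicative attention mechanism concrete, here is a minimal NumPy sketch of one gated-attention layer. It assumes dot-product attention scores between each document token state and the query token states; the function name and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention(doc_states, query_states):
    """One gated-attention layer (illustrative sketch).

    doc_states:   (T, h) intermediate RNN states for T document tokens
    query_states: (Q, h) RNN states for Q query tokens

    For each document token state d_i, attend over the query to form a
    token-specific query summary q~_i, then gate d_i by element-wise
    (multiplicative) interaction: x_i = d_i * q~_i.
    """
    T, h = doc_states.shape
    out = np.empty_like(doc_states)
    for i in range(T):
        scores = query_states @ doc_states[i]   # (Q,) dot-product scores
        alpha = softmax(scores)                 # attention over query tokens
        q_tilde = alpha @ query_states          # (h,) query summary for token i
        out[i] = doc_states[i] * q_tilde        # multiplicative gating
    return out
```

In a multi-hop architecture, this layer would be applied between successive document-reader RNN layers, so each hop refines the query-specific token representations.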