Tue, Nov 07, 2017 @ 03:30 PM - 04:50 PM
Conferences, Lectures, & Seminars
Speaker: Danqi Chen, Stanford
Talk Title: From Reading Comprehension to Open-Domain Question Answering
Series: CS Colloquium
Abstract: This lecture satisfies requirements for CSCI 591: Research Colloquium.
Enabling a computer to understand a document well enough to answer comprehension questions is a central, yet unsolved, goal of NLP. This task of reading comprehension (i.e., question answering over a passage of text) has seen a resurgence of interest due to the creation of large-scale datasets and well-designed neural network models.
I will talk about how we build simple and effective models for advancing a machine's ability at reading comprehension. I'll focus on explaining the logical structure behind these neural architectures and discussing their capacities as well as their limits.
Next, I'll talk about how we combine state-of-the-art reading comprehension systems with traditional IR components to build a new generation of open-domain question answering systems. Our system is much simpler than traditional QA systems, answers questions efficiently over the full English Wikipedia, and shows great promise on multiple QA benchmarks.
Biography: Danqi Chen is a Ph.D. candidate in Computer Science at Stanford University, advised by Christopher Manning. She works on deep learning for natural language processing, and is particularly interested in the intersection between text understanding and knowledge representation/reasoning. Her research spans machine comprehension/question answering, knowledge base construction, and syntactic parsing, with an emphasis on building principled yet highly effective models. She is a recipient of a Facebook Fellowship, a Microsoft Research Women's Fellowship, and outstanding paper awards at ACL'16 and EMNLP'17. Previously, she received her B.S. with honors from Tsinghua University in 2012.
Host: Fei Sha
Audiences: Everyone Is Invited
Contact: Computer Science Department