Thu, Nov 12, 2020 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: John Hewitt, Stanford University
Talk Title: The Unreasonable Syntactic Expressivity of RNNs
Series: NL Seminar
Abstract: In 2015, Andrej Karpathy posted a now-famous blog post, "The Unreasonable Effectiveness of Recurrent Neural Networks." Capturing this sense of wonder, Karpathy wrote: "We'll train RNNs to generate text character by character and ponder the question 'how is that even possible?'" RNNs empirically generate natural language with high syntactic fidelity, yet their success is not well understood theoretically. In this talk, I will provide theoretical insight into this success, proving in a finite-precision setting that RNNs can efficiently generate bounded hierarchical languages that reflect the scaffolding of natural language syntax. I will introduce Dyck-(k, m), the language of well-nested brackets of k types with nesting depth bounded by m, reflecting the bounded memory requirements and long-distance dependencies of natural language syntax. The best previously known results use O(k^(m/2)) memory (hidden units) to generate these languages. I will prove that an RNN with O(m log k) hidden units suffices, an exponential reduction in memory, via an explicit construction. Finally, I will show that no algorithm, even with unbounded computation, can suffice with o(m log k) hidden units.
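To make the language concrete: the following is a minimal sketch (not from the talk) of a stack-based recognizer for Dyck-(k, m), the language of well-nested strings over k bracket types whose nesting depth never exceeds m. The token encoding as ("open", i) / ("close", i) pairs is an assumption made here for illustration.

```python
# Hypothetical recognizer for Dyck-(k, m): well-nested brackets of
# k types with nesting depth at most m. Tokens are ("open", i) or
# ("close", i) for a bracket type i in range(k).

def in_dyck_km(tokens, k, m):
    stack = []
    for kind, i in tokens:
        if not (0 <= i < k):
            return False
        if kind == "open":
            stack.append(i)
            if len(stack) > m:  # nesting depth exceeds the bound m
                return False
        elif kind == "close":
            # the closing bracket must match the most recent open one
            if not stack or stack.pop() != i:
                return False
        else:
            return False
    return not stack  # every open bracket must be closed

# "( [ ] )" with k=2 bracket types: in Dyck-(2, 2), not in Dyck-(2, 1)
s = [("open", 0), ("open", 1), ("close", 1), ("close", 0)]
print(in_dyck_km(s, k=2, m=2))  # True
print(in_dyck_km(s, k=2, m=1))  # False: depth 2 exceeds m=1
```

Note how the recognizer's state is a stack of at most m entries, each naming one of k types, i.e. about m log k bits, which matches the O(m log k) hidden-unit bound discussed in the abstract.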
Biography: John is a third-year PhD student in computer science at Stanford University, advised by Chris Manning and Percy Liang. He works on understanding and improving how unsupervised neural networks learn and process human languages. He is supported by a National Science Foundation Graduate Research Fellowship and is the recipient of an EMNLP Best Paper Runner-Up award.
Host: Jon May and Mozhdeh Gheini
More Info: https://nlg.isi.edu/nl-seminar/
WebCast Link: https://usc.zoom.us/j/95584315616
Audiences: Everyone Is Invited
Contact: Peter Zamar