Events for July 21, 2017
-
CAIS Seminar: Dr. Amy Greenwald (Brown University) - The Interplay of Agent and Market Design
Fri, Jul 21, 2017 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Dr. Amy Greenwald, Brown University
Talk Title: The Interplay of Agent and Market Design
Series: Center for AI in Society (CAIS) Seminar Series
Abstract: We humans make hundreds of routine decisions daily. More often than not, the impact of our decisions depends on the decisions of others. As AI progresses, we are offloading more and more of these decisions to artificial agents. Dr. Greenwald's research is aimed at building AI agents that make effective decisions in multi-agent environments that are part human and part artificial. The bulk of her efforts in this space have been relevant to economic domains, mostly in service of perfecting market designs. In this talk, she will discuss AI agent design in applications ranging from renewable energy markets to online ad exchanges to wireless spectrum auctions.
Biography: Dr. Amy Greenwald is an Associate Professor of Computer Science at Brown University in Providence, Rhode Island. She studies game-theoretic and economic interactions among computational agents, applied to areas like autonomous bidding in wireless spectrum auctions and ad exchanges. In 2011, she was named a Fulbright Scholar to the Netherlands (though she declined the award). She was awarded a Sloan Fellowship in 2006; she was nominated for the 2002 Presidential Early Career Award for Scientists and Engineers (PECASE); and she was named one of the Computing Research Association's Digital Government Fellows in 2001. Before joining the faculty at Brown, Dr. Greenwald was employed by IBM's T.J. Watson Research Center. Her paper entitled "Shopbots and Pricebots" (joint work with Jeff Kephart) was named Best Paper at IBM Research in 2000.
Host: Milind Tambe
Location: Ronald Tutor Hall of Engineering (RTH) - 217
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
NL Seminar - Neural Sequence Models: Interpretation and Augmentation
Fri, Jul 21, 2017 @ 03:00 PM - 04:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Xing Shi, USC/ISI
Talk Title: Neural Sequence Models: Interpretation and Augmentation
Series: Natural Language Seminar
Abstract: Recurrent neural networks (RNNs) have been successfully applied to various natural language processing tasks, including language modeling, machine translation, and text generation. However, several obstacles still stand in the way. First, due to the RNN's distributed nature, its internal mechanism is difficult to interpret, and it remains a black box. Second, because of the large vocabulary sets involved, text generation is very time-consuming. Third, there is no flexible way to constrain the sequence model's generation with external knowledge. Last, huge amounts of training data must be collected to guarantee the performance of these neural models, whereas annotated data, such as the parallel text used in machine translation, are expensive to obtain. This work aims to address the four challenges mentioned above.
To further understand the internal mechanism of the RNN, I choose neural machine translation (NMT) systems as a testbed. I first investigate how NMT outputs target strings of appropriate lengths, locating a collection of hidden units that learns to explicitly implement this functionality. Then I investigate whether NMT systems learn source-language syntax as a by-product of training on string pairs. I find that both local and global syntactic information about source sentences is captured by the encoder, with different types of syntax stored in different layers at varying degrees of concentration.
To speed up text generation, I propose two novel GPU-based algorithms: (1) utilizing the source/target word alignment information to shrink the target-side run-time vocabulary, and (2) applying locality-sensitive hashing to find nearest word embeddings. Both methods lead to a 2-3x speedup on four translation tasks without hurting machine translation accuracy as measured by BLEU. Furthermore, I integrate a finite-state acceptor into the neural sequence model during generation, providing a flexible way to constrain the output, and I successfully apply this to poem generation in order to control the pentameter and rhyme.
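The second speedup above can be illustrated with a short sketch. This is not the speaker's actual implementation; it is a minimal, illustrative version of locality-sensitive hashing via random hyperplane signatures, where each word embedding is bucketed by which side of a set of random hyperplanes it falls on, so a nearest-word lookup only scans one bucket instead of the whole vocabulary. All names and sizes here are invented for illustration.

```python
# Minimal sketch of LSH for nearest word embeddings (random hyperplanes).
# Illustrative only: vocab_size, dim, and n_planes are made-up values.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, n_planes = 1000, 64, 8

embeddings = rng.standard_normal((vocab_size, dim))
planes = rng.standard_normal((n_planes, dim))

def signature(v):
    # Hash a vector to an n_planes-bit signature: one bit per hyperplane,
    # set according to which side of the hyperplane the vector lies on.
    return tuple((planes @ v > 0).astype(int))

# Preprocessing: bucket every word embedding by its signature.
buckets = {}
for i, e in enumerate(embeddings):
    buckets.setdefault(signature(e), []).append(i)

def nearest_word(query):
    # Scan only the query's bucket (similar vectors tend to share a
    # signature); fall back to brute force if the bucket is empty.
    candidates = buckets.get(signature(query), range(vocab_size))
    return max(candidates, key=lambda i: float(embeddings[i] @ query))
```

The trade-off is the usual LSH one: fewer hyperplanes mean bigger buckets (slower but more accurate), while more hyperplanes shrink the buckets at the risk of missing the true nearest neighbor in an adjacent bucket.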
Based on the above successes, I propose the following future work: (1) go one step further toward interpretation by finding unit-feature mappings, learning units' temporal behavior, and understanding different hyper-parameter settings; and (2) improve NMT performance on low-resource language pairs by fusing an external language model, feeding in explicit target-side syntax, and utilizing better word embeddings.
Biography: Xing Shi is a PhD student at ISI working with Prof. Kevin Knight.
Host: Marjan Ghazvininejad and Kevin Knight
More Info: http://nlg.isi.edu/nl-seminar/
Location: Information Science Institute (ISI) - 11th Flr Conf Rm # 1135, Marina Del Rey
Audiences: Everyone Is Invited
Contact: Peter Zamar
Event Link: http://nlg.isi.edu/nl-seminar/