Events for November 01, 2018
-
NL Seminar - Exposing Brittleness in Reading Comprehension Systems
Thu, Nov 01, 2018 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Robin Jia, Stanford University
Talk Title: Exposing Brittleness in Reading Comprehension Systems
Series: Natural Language Seminar
Abstract: Reading comprehension systems that answer questions over a context passage can often achieve high test accuracy, but they are frustratingly brittle: they often rely heavily on superficial cues, and therefore struggle on out-of-domain inputs. In this talk, I will describe our work on understanding and challenging these systems. First, I will show how to craft adversarial reading comprehension examples by adding irrelevant distracting text to the context passage. Next, I will present the newest version of the SQuAD dataset, SQuAD 2.0, which tests whether models can distinguish answerable questions from similar but unanswerable ones. Finally, I will share some observations from our recent attempts to use reading comprehension systems as a natural language interface for building other NLP systems.
Biography: Robin Jia is a fifth-year PhD student advised by Percy Liang at Stanford University. He is an NSF Graduate Fellow and has received an Outstanding Paper Award from EMNLP and a Best Short Paper Award from ACL.
Host: Xusen Yin
More Info: http://nlg.isi.edu/nl-seminar/
Location: Information Sciences Institute (ISI) - 6th Floor Conference Room #689
Audiences: Everyone Is Invited
Contact: Peter Zamar
Event Link: http://nlg.isi.edu/nl-seminar/
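The abstract above centers on two concrete probes: appending an irrelevant but superficially similar distractor sentence to a passage, and asking a model to recognize unanswerable questions (SQuAD 2.0). The following is a minimal illustrative sketch of both probes, not the speaker's code; the Hugging Face transformers pipeline, the deepset/roberta-base-squad2 checkpoint, the handle_impossible_answer flag, and the example passage are all assumptions chosen only for demonstration.

    from transformers import pipeline

    # Illustrative only: any extractive QA model fine-tuned on SQuAD 2.0 would do.
    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    context = (
        "Tim Berners-Lee proposed the World Wide Web in 1989 while working at CERN, "
        "and the first website went online in 1991."
    )
    question = "In what year did the first website go online?"

    # Adversarial probe: a distractor that mimics the question's wording but is irrelevant.
    distractor = " The first issue of the Hyperlink Journal went online in 1987."

    clean = qa(question=question, context=context)
    attacked = qa(question=question, context=context + distractor)
    print("clean answer:   ", clean["answer"], round(clean["score"], 3))
    print("attacked answer:", attacked["answer"], round(attacked["score"], 3))

    # SQuAD 2.0-style probe: allow the model to abstain on an unanswerable question.
    unanswerable = "In what year did CERN shut down the first website?"
    result = qa(question=unanswerable, context=context, handle_impossible_answer=True)
    print("abstention test:", result["answer"] or "<no answer>")

A robust reader keeps its answer when the distractor is appended and abstains on the unanswerable question; a brittle one may flip its prediction or force an answer.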
-
CS Distinguished Lecture: Cynthia Dwork (Harvard University) - Skewed or Rescued? The Emerging Theory of Algorithmic Fairness
Thu, Nov 01, 2018 @ 03:30 PM - 04:50 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Cynthia Dwork, Harvard University
Talk Title: Skewed or Rescued? The Emerging Theory of Algorithmic Fairness
Series: Computer Science Distinguished Lecture Series
Abstract: Data, algorithms, and systems have biases embedded within them reflecting designers' explicit and implicit choices, historical biases, and societal priorities. They form, literally and inexorably, a codification of values. 'Unfairness' of algorithms - for tasks ranging from advertising to recidivism prediction - has attracted considerable attention in the popular press. The talk will discuss recent work in the nascent mathematically rigorous study of fairness in classification and scoring.
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Cynthia Dwork, the Gordon McKay Professor of Computer Science at the John A. Paulson School of Engineering and Applied Sciences at Harvard, the Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study, and an Affiliated Faculty Member at Harvard Law School, is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. With seminal contributions in cryptography, distributed computing, and ensuring statistical validity, her most recent focus is on fairness in classification algorithms. Dwork is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the American Philosophical Society, and is a Fellow of the American Academy of Arts and Sciences and of the ACM.
Host: Computer Science Department
Location: Henry Salvatori Computer Science Center (SAL) - 101
Audiences: Everyone Is Invited
Contact: Computer Science Department
-
VITERBI STUDENT SPEAKER SYMPOSIUM
Thu, Nov 01, 2018 @ 05:00 PM - 07:00 PM
Viterbi School of Engineering Student Affairs
Workshops & Infosessions
The Viterbi Engineering Writing Program presents the Viterbi Student Speaker Symposium! Come listen to undergraduate and graduate students speak about some of the most important issues in engineering today.
HOPE TO SEE YOU THERE!
Email mjt@usc.edu with any questions.
Location: Ronald Tutor Hall of Engineering (RTH) - 211
Audiences: Everyone Is Invited
Contact: Helen Choi