University of Southern California

Events Calendar




Conferences, Lectures, & Seminars
Events for March

  • CS Colloquium: Hengshuang Zhao (University of Oxford) - Advancing Visual Intelligence via Neural System Design

    Mon, Mar 01, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Hengshuang Zhao, University of Oxford

    Talk Title: Advancing Visual Intelligence via Neural System Design

    Series: CS Colloquium

    Abstract: Building intelligent visual systems is essential for the next generation of artificial intelligence systems. It is a fundamental tool for many disciplines and benefits various applications such as autonomous driving, robotics, surveillance, and augmented reality. An accurate and efficient intelligent visual system has a deep understanding of the scene, objects, and humans, and can automatically understand its surroundings. In general, 2D images and 3D point clouds are the two most common data representations in our daily life. Designing powerful image understanding and point cloud processing systems forms the two pillars of visual intelligence, enabling artificial intelligence systems to understand and interact with the current state of the environment automatically. In this talk, I will first present our efforts in designing modern neural systems for 2D image understanding, including high-accuracy and high-efficiency semantic parsing structures and a unified panoptic parsing architecture. Then, I will go one step further to the design of neural systems for processing complex 3D scenes, including semantic-level and instance-level understanding. Further, I will show our latest work on unified 2D-3D reasoning frameworks, which are fully based on self-attention mechanisms. In the end, I will discuss the challenges, up-to-date progress, and promising future directions for building advanced intelligent visual systems.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Dr. Hengshuang Zhao is a postdoctoral researcher at the University of Oxford. Before that, he obtained his Ph.D. degree from the Chinese University of Hong Kong. His general research interests cover the broad area of computer vision, machine learning, and artificial intelligence, with special emphasis on building intelligent visual systems. He and his team have won several championships in competitive international challenges such as the ImageNet Scene Parsing Challenge. He was recognized as an outstanding/top reviewer at ICCV'19 and NeurIPS'19, and received the Rising Star Award at the World Artificial Intelligence Conference 2020. Some of his research projects are supported by Microsoft, Adobe, Uber, Intel, and Apple. His work has been cited more than 5,000 times, with 5,000+ GitHub credits and 80,000+ YouTube views.

    Host: Ramakant Nevatia

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CS Colloquium: Leilani Gilpin (MIT CSAIL) - Anomaly Detection Through Explanations

    Tue, Mar 02, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Leilani Gilpin, MIT CSAIL

    Talk Title: Anomaly Detection Through Explanations

    Series: CS Colloquium

    Abstract: Under most conditions, complex systems are imperfect. When errors occur, as they inevitably will, systems need to be able to (1) localize the error and (2) take appropriate action to mitigate the repercussions of that error. In this talk, I present new methodologies for detecting and explaining errors in complex systems.
    My novel contribution is a system-wide monitoring architecture, which is composed of introspective, overlapping committees of subsystems.
    Each subsystem is encapsulated in a "reasonableness" monitor, an adaptable framework that supplements local decisions with commonsense data and reasonableness rules. This framework is dynamic and introspective: it allows each subsystem to defend its decisions in different contexts: to the committees it participates in and to itself. For reconciling system-wide errors, I developed a comprehensive architecture: "Anomaly Detection through Explanations (ADE)." The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be traced and queried to determine the support of a decision, and to construct counterfactual explanations. I have applied this methodology to detect incorrect labels in semi-autonomous vehicle data, and to reconcile inconsistencies in simulated, anomalous driving scenarios.

    My work has opened up the new area of explanatory anomaly detection, towards a vision in which complex machines will be articulate by design; dynamic, internal explanations will be part of the design criteria; and system-level explanations will be able to be challenged in an adversarial proceeding.
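    To make the monitoring idea concrete, here is a toy sketch (my illustration with hypothetical rule and subsystem names, not the ADE implementation): a subsystem is wrapped so each of its decisions is checked against commonsense rules, and any violation is surfaced as an explanation rather than silently passed downstream.

```python
def reasonableness_monitor(subsystem, rules):
    """Wrap a subsystem so every decision is checked against commonsense
    rules; violations become explanations instead of silent errors."""
    def monitored(observation):
        decision = subsystem(observation)
        violations = [msg for check, msg in rules
                      if not check(observation, decision)]
        return {"decision": decision,
                "reasonable": not violations,
                "explanation": violations or ["all rules satisfied"]}
    return monitored

# Toy perception subsystem that labels a fast-moving object as a tree.
perceive = lambda obs: obs["predicted_label"]
rules = [(lambda obs, d: not (d == "tree" and obs["speed"] > 0),
          "trees do not move")]
monitor = reasonableness_monitor(perceive, rules)
report = monitor({"predicted_label": "tree", "speed": 20.0})
```

    In a full system, each monitored subsystem would defend its decision this way to the committees it participates in, with an explanation synthesizer reconciling the resulting arguments.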

    This lecture satisfies requirements for CSCI 591: Research Colloquium


    Biography: Leilani H. Gilpin is a research scientist at Sony AI and a collaborating researcher at MIT CSAIL. Her research focuses on enabling opaque autonomous systems to explain themselves for robust decision-making, system debugging, and accountability. Her current work integrates explainability into reinforcement learning for game-playing agents.

    She received her PhD in Electrical Engineering and Computer Science from MIT in 2020, and holds an M.S. in Computational and Mathematical Engineering from Stanford University, and a B.S. in Mathematics (with honors), B.S. in Computer Science (with highest honors), and a music minor from UC San Diego. Outside of research, Leilani enjoys swimming, cooking, and rowing.


    Host: Yan Liu

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CS Colloquium: Zhuoran Yang (Princeton University) - Demystifying (Deep) Reinforcement Learning: The Pessimist, The Optimist, and Their Provable Efficiency

    Wed, Mar 03, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Zhuoran Yang, Princeton University

    Talk Title: Demystifying (Deep) Reinforcement Learning: The Pessimist, The Optimist, and Their Provable Efficiency

    Series: CS Colloquium

    Abstract: Coupled with powerful function approximators such as deep neural networks, reinforcement learning (RL) has achieved tremendous empirical success. However, its theoretical understanding lags behind. In particular, it remains unclear how to provably attain the optimal policy with a finite regret or sample complexity. In this talk, I will present two sides of the same coin, which demonstrate an intriguing duality between pessimism and optimism.

    - In the offline setting, we aim to learn the optimal policy based on a dataset collected a priori. Due to the lack of active interaction with the environment, we suffer from insufficient coverage of the dataset. To maximally exploit the dataset, we propose a pessimistic least-squares value iteration algorithm, which achieves a minimax-optimal sample complexity.

    - In the online setting, we aim to learn the optimal policy by actively interacting with an environment. To strike a balance between exploration and exploitation, we propose an optimistic least-squares value iteration algorithm, which achieves a \sqrt{T} regret in the presence of linear, kernel, and neural function approximators.
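    As a rough illustration of this duality (my tabular sketch with made-up numbers, not the speaker's least-squares algorithms): optimism adds a count-based uncertainty bonus to the Bellman backup so rarely tried actions look more valuable online, while pessimism subtracts the same bonus so unsupported actions look less valuable offline.

```python
import numpy as np

def value_iteration_with_bonus(P, R, counts, sign=+1, H=10, beta=1.0):
    """Tabular sketch of optimistic (sign=+1) vs pessimistic (sign=-1)
    value iteration. P: (S, A, S) transitions, R: (S, A) rewards,
    counts: (S, A) visit counts; the bonus shrinks as counts grow."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(H):
        bonus = sign * beta / np.sqrt(np.maximum(counts, 1))
        Q = R + bonus + P @ V              # Bellman backup +/- uncertainty bonus
        V = np.clip(Q.max(axis=1), 0.0, H)
    return V

# 2 states, 2 actions; action 1 in state 0 has barely been tried.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.8, 0.2]]])
R = np.array([[0.5, 0.5], [0.2, 0.2]])
counts = np.array([[100, 1], [50, 50]])
V_optimistic = value_iteration_with_bonus(P, R, counts, sign=+1)
V_pessimistic = value_iteration_with_bonus(P, R, counts, sign=-1)
```

    The optimistic values always dominate the pessimistic ones, which is exactly the intended behavior: the same bonus term drives exploration in one regime and caution in the other.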


    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Zhuoran Yang is a final-year Ph.D. student in the Department of Operations Research and Financial Engineering at Princeton University, advised by Professor Jianqing Fan and Professor Han Liu. Before attending Princeton, he obtained a Bachelor of Mathematics degree from Tsinghua University. His research interests lie at the interface between machine learning, statistics, and optimization. The primary goal of his research is to design a new generation of machine learning algorithms for large-scale and multi-agent decision-making problems, with both statistical and computational guarantees. He is also interested in the application of learning-based decision-making algorithms to real-world problems that arise in robotics, personalized medicine, and computational social science.

    Host: Haipeng Luo

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CS Colloquium: Abhinav Verma (University of Texas - Austin) - Neurosymbolic Reinforcement Learning

    Thu, Mar 04, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Abhinav Verma, University of Texas - Austin

    Talk Title: Neurosymbolic Reinforcement Learning

    Series: CS Colloquium

    Abstract: Recent advances in Artificial Intelligence (AI) have been driven by deep neural networks. However, neural networks have certain well-known flaws: they are difficult to interpret and verify, have high variability, and lack domain awareness. These issues create a deficiency of trust and are hence a significant impediment to the deployment of AI in safety-critical applications. In this talk, I will present work that addresses these drawbacks via neurosymbolic learning in the reinforcement learning paradigm. Neurosymbolic agents combine experience-based neural learning with partial symbolic knowledge expressed via programs in a Domain Specific Language (DSL). Using a DSL provides a principled mechanism to leverage high-level abstractions for machine learning models, and establishes a synergistic relationship between machine learning and program synthesis.

    To overcome the challenges of policy search in non-differentiable program space, we introduce a meta-algorithm that is based on mirror descent, program synthesis, and imitation learning. This approach interleaves the use of synthesized symbolic programs to regularize neural learning with the imitation of gradient-based learning to improve the quality of synthesized programs. This perspective allows us to prove robust expected regret bounds and finite-sample guarantees for this algorithm. The theoretical results guaranteeing more reliable learning are accompanied by promising empirical results on complex tasks such as learning autonomous driving agents and generating interpretable programs for behavior annotation.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Abhinav Verma is a PhD Candidate at the University of Texas at Austin, where he is advised by Swarat Chaudhuri. His research lies at the intersection of machine learning and formal methods, with a focus on building intelligent systems that are reliable, transparent, and secure. His work builds connections between the symbolic reasoning and inductive learning paradigms of artificial intelligence. He is currently supported by a JP Morgan AI Research PhD Fellowship.

    Host: Mukund Raghothaman / Bistra Dilkina

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CS Colloquium: TBA

    Fri, Mar 05, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: TBA, TBA

    Talk Title: TBA

    Series: CS Colloquium

    Abstract: TBA

    Biography: TBA

    Host: Ramakant Nevatia

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CS Colloquium: Daniel Fried (UC Berkeley) - Learning Grounded Pragmatic Communication

    Fri, Mar 05, 2021 @ 12:00 PM - 01:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Daniel Fried, UC Berkeley

    Talk Title: Learning Grounded Pragmatic Communication

    Series: CS Colloquium

    Abstract: To generate language, natural language processing systems predict what to say---why not also predict how listeners will respond? We show how language generation and interpretation across varied grounded domains can be improved through pragmatic inference: explicitly reasoning about the actions and intents of the people that the systems interact with. We train neural generation and interpretation models which ground language into a world context, then layer a pragmatic inference procedure on top of these models. This pragmatic procedure predicts how human listeners will interpret text generated by the models, and reasons counterfactually about why human speakers produced the text they did. We find that this approach improves models' success at generating and interpreting instructions in real indoor environments, as well as in a challenging spatial reference dialogue task.
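    The pragmatic inference described above can be sketched in the rational-speech-acts style (my toy illustration with made-up referents, not the speaker's neural models): a speaker scores each utterance by how a literal listener would resolve it, and so prefers utterances that pick out the intended referent unambiguously.

```python
import numpy as np

def pragmatic_speaker(lexicon, alpha=3.0):
    """RSA-style sketch. lexicon[u, w] = 1 if utterance u is literally
    true of referent w. Returns the speaker distribution P(u | w)."""
    L0 = lexicon / lexicon.sum(axis=1, keepdims=True)   # literal listener P(w | u)
    S1 = (L0 + 1e-12) ** alpha                          # speaker utility exp(alpha * log L0)
    return S1 / S1.sum(axis=0, keepdims=True)           # normalize over utterances

# Two referents: one wears only glasses, one wears glasses and a hat.
# Utterances: "glasses" (true of both) and "hat" (true of the second only).
lexicon = np.array([[1.0, 1.0],   # "glasses"
                    [0.0, 1.0]])  # "hat"
S1 = pragmatic_speaker(lexicon)
```

    For the hat-wearer, the speaker strongly prefers "hat" even though "glasses" is also literally true, because a literal listener would find "glasses" ambiguous; this counterfactual reasoning about the listener is what the pragmatic layer adds on top of the grounded base models.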

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Daniel Fried is a final-year PhD candidate at UC Berkeley in natural language processing, advised by Dan Klein. His research focuses on language grounding: tying language to world contexts, for tasks like visual- and embodied-instruction following, text generation, and dialogue. Previously, he graduated with an MPhil from the University of Cambridge and a BS from the University of Arizona. His work has been supported by a Google PhD Fellowship, an NDSEG Fellowship, and a Churchill Scholarship.

    Host: Xiang Ren

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Mariya Toneva (Carnegie Mellon University) - Data-Driven Transfer of Insight between Brains and AI Systems

    Mon, Mar 08, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Mariya Toneva, Carnegie Mellon University

    Talk Title: Data-Driven Transfer of Insight between Brains and AI Systems

    Series: CS Colloquium

    Abstract: Several major innovations in artificial intelligence (AI) (e.g. convolutional neural networks, experience replay) are based on findings about the brain. However, the underlying brain findings took many years to first consolidate and many more to transfer to AI. Moreover, these findings were made using invasive methods in non-human species. For cognitive functions that are uniquely human, such as natural language processing, there is no suitable model organism and a mechanistic understanding is that much farther away.

    In this talk, I will present my research program that circumvents these limitations by establishing a direct connection between the human brain and AI systems with two main goals: 1) to improve the generalization performance of AI systems and 2) to improve our mechanistic understanding of cognitive functions. Lastly, I will discuss future directions that build on these approaches to investigate the role of memory in meaning composition, both in the brain and AI. This investigation will lead to methods that can be applied to a wide range of AI domains, in which it is important to adapt to new data distributions, continually learn to perform new tasks, and learn from few samples.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Mariya Toneva is a Ph.D. candidate in a joint program between Machine Learning and Neural Computation at Carnegie Mellon University, where she is advised by Tom Mitchell and Leila Wehbe. She received a B.S. in Computer Science and Cognitive Science from Yale University. Her research is at the intersection of Artificial Intelligence, Machine Learning, and Neuroscience. Mariya works on bridging language in machines with language in the brain, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems.

    Host: Yan Liu

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Saiph Savage (University of Washington) - The Future of A.I. for Social Good

    Mon, Mar 08, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Saiph Savage, University of Washington

    Talk Title: The Future of A.I. for Social Good

    Series: CS Colloquium

    Abstract: The A.I. industry has powered a futuristic reality of self-driving cars and voice assistants to help us with almost any need. However, the A.I. industry has also created systematic challenges. For instance, while it has led to platforms where workers label data to improve machine learning algorithms, my research has uncovered that these workers earn less than minimum wage. We are also seeing the surge of A.I. algorithms that privilege certain populations and racially exclude others. If we were able to fix these challenges, we could create greater societal justice and enable A.I. that better addresses people's needs, especially those of groups we have traditionally excluded.

    In this talk, I will discuss some of these urgent global problems that my research has uncovered in the A.I. industry. I will present how we can start to address these problems through my proposed "A.I. For Good" framework. My framework uses value sensitive design to understand people's values and rectify harm. I will present case studies where I use this framework to design A.I. systems that improve the labor conditions of the workers operating behind the scenes in our A.I. industry, as well as how we can use this framework to safeguard our democracies. I conclude by presenting a research agenda for studying the impact of A.I. on society and researching effective socio-technical solutions in favor of the future of work and countering techno-authoritarianism.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Saiph Savage conducts research at the intersection of Human Computer Interaction, A.I., and Civic Technology. She is one of the 35 Innovators under 35 by the MIT Technology Review, a Google Anita Borg Scholarship recipient, and a fellow at the Center for Democracy & Technology. Her work has been covered by the BBC, Deutsche Welle, and the New York Times, as well as published in top venues such as ACM CHI, CSCW, and AAAI ICWSM, where she has also won honorable mention awards. Dr. Savage has been awarded grants from the National Science Foundation, the United Nations, and industry, and has formalized new collaborations with federal and local governments, where she is driving them to adopt Human Centered Design and A.I. to deliver better experiences and government services to citizens. Dr. Savage opened the research area of Human Computer Interaction at West Virginia University, and her students have obtained fellowships and internships in industry (Facebook Research, Twitch Research, and Microsoft Research) as well as academia (Oxford Internet Institute). Saiph holds a bachelor's degree in Computer Engineering from the National Autonomous University of Mexico (UNAM), and a master's and Ph.D. in Computer Science from the University of California, Santa Barbara (UCSB). Dr. Savage currently works at the University of Washington; previously she was a Visiting Professor at Carnegie Mellon University (CMU). Additionally, she has been a tech worker at Microsoft Bing and Intel Labs, and a crowd research worker at Stanford.

    Host: Bistra Dilkina

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Sanghamitra Dutta (Carnegie Mellon University) - Reliable Machine Learning for High-Stakes Applications: Approaches Using Information Theory

    Mon, Mar 08, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Sanghamitra Dutta, Carnegie Mellon University

    Talk Title: Reliable Machine Learning for High-Stakes Applications: Approaches Using Information Theory

    Series: CS Colloquium

    Abstract: How do we make machine learning (ML) algorithms not only ethical, but also intelligible, explainable, and reliable? This is particularly important today as ML enters high-stakes applications such as hiring and education, often adversely affecting people's lives with respect to gender, race, etc. Identifying bias/disparity in a model's decision is often insufficient. We really need to dig deeper and bring in an understanding of anti-discrimination laws. For instance, Title VII of the US Civil Rights Act includes a subtle and important aspect that has implications for the ML models being used today: Disparities in hiring that can be explained by a business necessity are exempt. E.g., disparity arising due to code-writing skills may be deemed exempt for a software engineering job, but the disparity due to an aptitude test may not be (e.g. Griggs v. Duke Power '71). This leads us to a question that bridges the fields of fairness, explainability, and law: How can we identify and explain the sources of disparity in ML models, e.g., did the disparity arise due to the critical business necessities or not? In this talk, I propose the first systematic measure of "non-exempt disparity," i.e., the illegal bias which cannot be explained by business necessities. To arrive at a measure for the non-exempt disparity, I adopt a rigorous axiomatic approach that brings together concepts in information theory, in particular, an emerging body of work called Partial Information Decomposition, with causal inference tools. This quantification allows one to audit a firm's hiring practices, to check if they are compliant with the law. This may also allow one to correct the disparity by better explaining the source of the bias, also providing insights into accuracy-bias tradeoffs.

    My research bridges reliability in learning with reliability in computing, which has led to an emerging interdisciplinary area called "coded computing". Towards the end of this talk, I will also provide an overview of some of my results on coded reliable computing that addresses long-standing computational challenges in large-scale distributed machine learning (namely, stragglers, faults, failures) using tools from coding theory, optimization, and queueing.

    This lecture satisfies requirements for CSCI 591: Research Colloquium


    Biography: Sanghamitra Dutta (B. Tech. IIT Kharagpur) is a Ph.D. candidate at Carnegie Mellon University, USA. Her research interests revolve around machine learning, information theory, and statistics. She is currently focused on addressing the emerging reliability issues in machine learning concerning fairness, explainability, and law with recent publications at AAAI'20, ICML'20 (also featured in New Scientist and CMU Engineering News). In her prior work, she has also examined problems in reliable computing, proposing novel algorithmic solutions for large-scale distributed machine learning in the presence of faults and failures, using tools from coding theory (an emerging area called "coded computing"). Her results on coded computing address problems that have been open for several decades and have received substantial attention from across communities (published at IEEE Transactions on Information Theory'19,'20, NeurIPS'16, AISTATS'18, IEEE BigData'18, ICML Workshop Spotlight'19, ISIT'17,'18, Proceedings of IEEE'20 along with two pending patents). She is a recipient of the 2020 Cylab Presidential Fellowship, 2019 K&L Gates Presidential Fellowship, 2019 Axel Berny Presidential Graduate Fellowship, 2017 Tan Endowed Graduate Fellowship, 2016 Prabhu and Poonam Goel Graduate Fellowship, the 2015 Best Undergraduate Project Award at IIT Kharagpur, and the 2014 HONDA Young Engineer and Scientist Award. She has also pursued research internships at IBM Research and Dataminr.

    Host: Bistra Dilkina

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Dani Yogatama (DeepMind) - Learning General Language Processing Agents

    Tue, Mar 09, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dani Yogatama, DeepMind

    Talk Title: Learning General Language Processing Agents

    Series: CS Colloquium

    Abstract: The ability to continuously learn and generalize to new problems quickly is a hallmark of general intelligence. Existing machine learning models work well when optimized for a particular benchmark, but they require many in-domain training examples (i.e., input-output pairs that are often costly to annotate), overfit to the idiosyncrasies of the benchmark, and do not generalize to out-of-domain examples. In contrast, humans are able to accumulate task-agnostic knowledge from multiple modalities to facilitate faster learning of new skills.

    In this talk, I will argue that obtaining such an ability for a language model requires significant advances in how we acquire, represent, and store knowledge in artificial systems. I will present two approaches in this direction: (i) an information theoretic framework that unifies several representation learning methods used in many domains (e.g., natural language processing, computer vision, audio processing) and allows principled constructions of new training objectives to learn better language representations; and (ii) a language model architecture that separates computation (information processing) in a large neural network and memory storage in a key-value database. I will conclude by briefly discussing a series of future research programs toward building a general linguistically intelligent agent.
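    As a loose illustration of the second idea, separating computation from memory storage (my sketch in the spirit of kNN-augmented language models, with hypothetical names and numbers, not the speaker's architecture): the network's next-token distribution is blended with votes from the nearest stored (context vector, next token) pairs in a key-value store.

```python
import numpy as np

def memory_augmented_next_token(hidden, model_probs, keys, values, k=2, lam=0.3):
    """Blend a model's next-token distribution with a kNN lookup into a
    key-value memory: keys are stored context vectors, values are the
    tokens that followed those contexts."""
    dists = np.linalg.norm(keys - hidden, axis=1)    # distance to each stored context
    nearest = np.argsort(dists)[:k]
    mem_probs = np.zeros_like(model_probs)
    for i in nearest:
        mem_probs[values[i]] += np.exp(-dists[i])    # closer memories vote harder
    mem_probs /= mem_probs.sum()
    return (1 - lam) * model_probs + lam * mem_probs

# Vocabulary of 4 tokens; the model is unsure, but memory strongly
# suggests token 2 for contexts near the current hidden state.
hidden = np.array([1.0, 0.0])
model_probs = np.array([0.25, 0.25, 0.25, 0.25])
keys = np.array([[0.9, 0.1], [1.1, -0.1], [-1.0, 0.0]])
values = np.array([2, 2, 0])
probs = memory_augmented_next_token(hidden, model_probs, keys, values)
```

    The appeal of such a separation is that the key-value database can grow or be edited without retraining the network that does the information processing.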

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Dani Yogatama is a staff research scientist at DeepMind. His research interests are in machine learning and natural language processing. He received his PhD from Carnegie Mellon University in 2015. He grew up in Indonesia and was a Monbukagakusho scholar in Japan prior to studying at CMU.

    Host: Xiang Ren

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Ranjay Krishna (Stanford University) - Visual Intelligence from Human Learning

    Tue, Mar 09, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Ranjay Krishna, Stanford University

    Talk Title: Visual Intelligence from Human Learning

    Series: CS Colloquium

    Abstract: At the core of human development is the ability to adapt to new, previously unseen stimuli. We comprehend new situations as a composition of previously seen information and ask one another for clarification when we encounter new concepts. Yet this ability to go beyond the confines of training data remains an open challenge for artificial intelligence agents. My research designs visual intelligence that can reason over new compositions and acquire new concepts by interacting with people. My talk will explore these challenges and present the following two lines of work:
    First, I will introduce scene graphs, a cognitively-grounded, compositional visual representation. I will discuss how to integrate scene graphs into a variety of computer vision tasks, enabling models to generalize to novel compositions from a few training examples. Since our introduction of scene graphs, the Computer Vision community has developed hundreds of scene graph models and utilized scene graphs to achieve state-of-the-art results across multiple core tasks, including object localization, captioning, image generation, question answering, 3D understanding, and spatio-temporal action recognition.
    Second, I will introduce a framework for socially situated learning. This framework pushes agents beyond traditional computer vision training paradigms and enables learning from human interactions in online social environments. I will showcase a real-world deployment of our agent, which learned to acquire new visual concepts by asking people targeted questions on social media. By interacting with over 230K people over 8 months, our agent learned to recognize hundreds of new concepts. This work demonstrates the possibility for agents to adapt and self-improve in real-world social environments.
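    The scene-graph representation described above can be sketched as a small data structure (my illustration, not the speaker's code): objects are nodes, pairwise relationships are labeled directed edges, and attributes hang off objects, so a new scene is a composition of known concepts.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Minimal scene graph: object nodes, attribute lists, and
    (subject, predicate, object) relationship edges."""
    objects: list = field(default_factory=list)      # e.g. "person"
    attributes: dict = field(default_factory=dict)   # object index -> [attributes]
    relations: list = field(default_factory=list)    # (subj_idx, predicate, obj_idx)

    def add_object(self, name, attrs=()):
        self.objects.append(name)
        self.attributes[len(self.objects) - 1] = list(attrs)
        return len(self.objects) - 1

    def relate(self, subj, predicate, obj):
        self.relations.append((subj, predicate, obj))

    def triples(self):
        """Yield human-readable (subject, predicate, object) triples."""
        for s, p, o in self.relations:
            yield (self.objects[s], p, self.objects[o])

# "A person riding a red bike" as a composition of familiar concepts.
g = SceneGraph()
person = g.add_object("person")
bike = g.add_object("bike", attrs=["red"])
g.relate(person, "riding", bike)
```

    Because the representation is compositional, a model that has seen "person riding horse" and "red car" separately can in principle generalize to "person riding red bike" without new training examples for that exact scene.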

    This lecture satisfies requirements for CSCI 591: Research Colloquium


    Biography: Ranjay Krishna is a 5th-year Ph.D. candidate at Stanford University, where he is co-advised by Fei-Fei Li and Michael Bernstein. His research lies at the intersection of computer vision and human-computer interaction; it draws on ideas from behavioral and social sciences to improve visual intelligence. His work has been recognized by the Christofer Stephenson Memorial award, as an Accell Innovation Scholar and by two Brown Institute for Media Innovation grants. His work has also been featured in Forbes magazine and in a PBS NOVA documentary. During his Ph.D., he re-designed Stanford's undergraduate Computer Vision course and currently also instructs the graduate Computer Vision course, Stanford's second largest course. He has a M.Sc. from Stanford University. Before that, he conferred a B.Sc. with a double major in Electrical Engineering and in Computer Science from Cornell University. In the past, he has interned at Google AI, Facebook AI Research, and Yahoo Research.

    Host: Ramakant Nevatia

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Distinguished Lecture: Jure Leskovec (Stanford University) - Mobility Networks for Modeling the Spread of COVID-19: Explaining Inequities and Informing Reopening

    Tue, Mar 09, 2021 @ 04:00 PM - 05:20 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Jure Leskovec, Stanford University

    Talk Title: Mobility Networks for Modeling the Spread of COVID-19: Explaining Inequities and Informing Reopening

    Series: Computer Science Distinguished Lecture Series

    Abstract: The COVID-19 pandemic dramatically changed human mobility patterns, necessitating epidemiological models which capture the effects of changes in mobility on virus spread. We introduce a metapopulation SEIR model that integrates fine-grained, dynamic mobility networks to simulate the spread of SARS-CoV-2 in 10 of the largest US metropolitan statistical areas. Derived from cell phone data, our mobility networks map the hourly movements of 98 million people from neighborhoods (census block groups, or CBGs) to points of interest (POIs) such as restaurants and religious establishments, connecting 57k CBGs to 553k POIs with 5.4 billion hourly edges. We show that by integrating these networks, a relatively simple SEIR model can accurately fit the real case trajectory, despite substantial changes in population behavior over time. Our model predicts that a small minority of "superspreader" POIs account for a large majority of infections and that restricting maximum occupancy at each POI is more effective than uniformly reducing mobility. Our model also correctly predicts higher infection rates among disadvantaged racial and socioeconomic groups solely from differences in mobility: we find that disadvantaged groups have not been able to reduce mobility as sharply, and that the POIs they visit are more crowded and therefore higher-risk. By capturing who is infected at which locations, our model supports detailed analyses that can inform more effective and equitable policy responses to COVID-19.
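    A minimal discrete-time sketch of the modeling idea (a toy with made-up parameters and visit counts, not the paper's calibrated model): each hour, a CBG's new exposures scale with how many of its visits go to POIs currently crowded with infectious visitors, as encoded by a bipartite mobility matrix.

```python
import numpy as np

def seir_step(S, E, I, R, W, beta=0.05, sigma=0.25, gamma=0.1):
    """One hour of a network SEIR model. W[g, p] = visits from census
    block group g to POI p; S/E/I/R are per-CBG compartment counts."""
    N = S + E + I + R
    frac_I = I / N                               # infectious fraction per CBG
    poi_risk = (W.T @ frac_I) / W.sum(axis=0)    # infectious share at each POI
    new_E = beta * S * (W @ poi_risk) / N        # exposures via visits to risky POIs
    return (S - new_E,
            E + new_E - sigma * E,
            I + sigma * E - gamma * I,
            R + gamma * I)

# 2 CBGs visiting 2 POIs for one hour.
W = np.array([[50.0, 10.0],
              [ 5.0, 40.0]])
S = np.array([900.0, 950.0]); E = np.array([50.0, 30.0])
I = np.array([50.0, 20.0]);  R = np.array([0.0, 0.0])
S2, E2, I2, R2 = seir_step(S, E, I, R, W)
```

    Even this toy version shows why the network matters: capping the columns of W (POI occupancy) reduces the most crowded POIs' risk directly, whereas scaling all of W down (uniform mobility reduction) dilutes the effect across every location.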

    Register in advance for this webinar at:

    https://usc.zoom.us/webinar/register/WN_UD7zYBdETsCyLBOiv2DoLw

    After registering, attendees will receive a confirmation email containing information about joining the webinar.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.


    Biography: Jure Leskovec is Associate Professor of Computer Science at Stanford University, Chief Scientist at Pinterest, and an investigator at the Chan Zuckerberg Biohub. Dr. Leskovec was a co-founder of the machine learning startup Kosei, which was later acquired by Pinterest. His research focuses on machine learning and data mining of large social, information, and biological networks. Computation over massive data is at the heart of his research and has applications in computer science, social sciences, marketing, and biomedicine. This research has won several awards, including the Lagrange Prize, a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Fellowship, and numerous best paper and test-of-time awards. It has also been featured in popular press outlets such as the New York Times and the Wall Street Journal. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, and his PhD in machine learning from Carnegie Mellon University, and completed postdoctoral training at Cornell University. You can follow him on Twitter at @jure.


    Host: Xiang Ren

    Webcast: https://usc.zoom.us/webinar/register/WN_UD7zYBdETsCyLBOiv2DoLw

    Location: Online Zoom Webinar

    WebCast Link: https://usc.zoom.us/webinar/register/WN_UD7zYBdETsCyLBOiv2DoLw

    Audiences: Everyone Is Invited

    Contact: Computer Science Department

  • CS Colloquium: Hongyang Zhang (Toyota Technological Institute) - New Advances in (Adversarially) Robust and Secure Machine Learning

    Wed, Mar 10, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Hongyang Zhang, Toyota Technological Institute

    Talk Title: New Advances in (Adversarially) Robust and Secure Machine Learning

    Series: CS Colloquium

    Abstract: Deep learning models are often vulnerable to adversarial examples. In this talk, we will focus on robustness and security of machine learning against adversarial examples. There are two types of defenses against such attacks: 1) empirical and 2) certified adversarial robustness.

    In the first part of the talk, we will see the foundation of our winning system, TRADES, in the NeurIPS'18 Adversarial Vision Challenge, in which we won 1st place out of 400 teams and 3,000 submissions. Our study is motivated by an intrinsic trade-off between robustness and accuracy: we provide a differentiable and tight surrogate loss for the trade-off using the theory of classification-calibrated losses. TRADES has record-breaking performance on various standard benchmarks and challenges, including the adversarial benchmark RobustBench, the NLP benchmark GLUE, and the Unrestricted Adversarial Examples Challenge hosted by Google, and has motivated many new attack methods powered by our TRADES benchmark.
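    For intuition, a robustness-accuracy trade-off objective of this flavor (natural loss plus a weighted worst-case KL term) can be written out for a binary logistic model, where the inner maximization is solvable exactly because the score is linear. This is a hedged sketch, not the TRADES implementation; deep networks require a PGD-style inner loop, and all names here are illustrative:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def bin_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    return (p * np.log((p + eps) / (q + eps))
            + (1 - p) * np.log((1 - p + eps) / (1 - q + eps)))

def trades_objective(w, x, y, radius, beta):
    """Natural log-loss + beta * max_{||d||_inf <= radius} KL(p(x) || p(x+d))
    for a linear classifier p(x) = sigmoid(w.x). Since the logit shifts by
    w.d, and KL is convex in q with its minimum at d = 0, the inner max is
    attained at an endpoint d = +/- radius * sign(w) and is solved exactly."""
    p = sigmoid(w @ x)
    nat = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    shift = radius * np.abs(w).sum()          # max |w.d| over the inf-ball
    worst = max(bin_kl(p, sigmoid(w @ x + shift)),
                bin_kl(p, sigmoid(w @ x - shift)))
    return nat + beta * worst
```

    Setting radius to zero recovers the plain natural loss; increasing the radius or beta trades accuracy on clean inputs for stability of predictions inside the perturbation ball.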

    In the second part of the talk, to equip empirical robustness with certification, we study certified adversarial robustness by random smoothing. On the one hand, we show that random smoothing on the TRADES-trained classifier achieves SOTA certified robustness when the perturbation radius is small. On the other hand, when the perturbation is large, i.e., independent of the inverse of the input dimension, we show that random smoothing is provably unable to certify L_infty robustness for an arbitrary random noise distribution. The intuition behind our theory reveals an intrinsic difficulty of achieving certified robustness by "random noise based methods", and inspires new directions as potential future work.
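    The certification side can be illustrated with a simplified form of the standard Gaussian randomized-smoothing L2 certificate, paired here with a crude Hoeffding confidence bound. This is an assumed, illustrative construction (not the speaker's method, which concerns the limits of such certificates for L_infty at large radii):

```python
import math
from statistics import NormalDist

def smoothed_certificate(votes_top, n, sigma, alpha=0.001):
    """Simplified randomized-smoothing certificate: R = sigma * Phi^{-1}(p_lower),
    where p_lower is a Hoeffding lower confidence bound (at level alpha) on the
    probability that Gaussian-noised copies of the input keep the top class,
    estimated from n noisy-sample votes. Returns 0.0 (abstain) when the bound
    does not exceed 1/2."""
    p_hat = votes_top / n
    p_lower = p_hat - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_lower)
```

    Note how the certified radius scales linearly with the noise level sigma: large radii demand large noise, which degrades clean accuracy, one concrete face of the difficulty the abstract describes.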

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Hongyang Zhang is a postdoctoral fellow at the Toyota Technological Institute at Chicago, hosted by Avrim Blum and Greg Shakhnarovich. He obtained his Ph.D. from the CMU Machine Learning Department in 2019, advised by Maria-Florina Balcan and David P. Woodruff. His research interests lie at the intersection of theory and practice of machine learning, robustness, and AI security. His methods won the championship or ranked at the top in various competitions, such as the NeurIPS'18 Adversarial Vision Challenge (all three tracks), the Unrestricted Adversarial Examples Challenge hosted by Google, and the NeurIPS'20 Challenge on Predicting Generalization of Deep Learning. He also authored a book in 2017.

    Host: David Kempe

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Vered Shwartz (University of Washington) - Commonsense Knowledge and Reasoning in Natural Language

    Wed, Mar 10, 2021 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Vered Shwartz, University of Washington

    Talk Title: Commonsense Knowledge and Reasoning in Natural Language

    Series: CS Colloquium

    Abstract: Natural language understanding models are trained on a sample of the situations they may encounter. Commonsense and world knowledge, and language understanding and reasoning abilities can help them address unknown situations sensibly. This talk will discuss several lines of work addressing commonsense knowledge and reasoning in natural language. First, I will introduce a new paradigm for commonsense reasoning tasks with introspective knowledge discovery through a process of self-asking information seeking questions ("what is the definition of...") and answering them. Second, I will present work on nonmonotonic reasoning in natural language, a core human reasoning ability that has been studied in classical AI but mostly overlooked in modern NLP, including abductive reasoning (reasoning about plausible explanations), counterfactual reasoning (what if?) and defeasible reasoning (updating beliefs given additional information). Next, I will discuss how generalizing existing knowledge can help language understanding, and demonstrate it for noun compound paraphrasing (e.g. olive oil is "oil made of olives"). I will conclude with open problems and future directions in language, knowledge, and reasoning.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Vered Shwartz is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Yejin Choi. Vered's research interests are in NLP, AI, and machine learning, particularly focusing on commonsense knowledge and reasoning, computational semantics, discourse and pragmatics. Previously, Vered completed her PhD in Computer Science from Bar-Ilan University, under the supervision of Ido Dagan. Vered's work has been recognized with several awards, including The Eric and Wendy Schmidt Postdoctoral Award for Women in Mathematical and Computing Sciences, the Clore Foundation Scholarship, and an ACL 2016 outstanding paper award.

    Host: Xiang Ren

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Gedas Bertasius (Facebook AI) - Designing Video Models for Human Behavior Understanding

    Thu, Mar 11, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Gedas Bertasius, Facebook AI

    Talk Title: Designing Video Models for Human Behavior Understanding

    Series: CS Colloquium

    Abstract: Many modern computer vision applications require extracting core attributes of human behavior such as attention, action, or intention. Extracting such behavioral attributes requires powerful video models that can reason about human behavior directly from raw video data. To design such models we need to answer the following three questions: how do we (1) model videos, (2) learn from videos, and lastly, (3) use videos to predict human behavior?

    In this talk I will present a series of methods to answer each of these questions. First, I will introduce TimeSformer, the first convolution-free architecture for video modeling built exclusively with self-attention. It achieves the best reported numbers on major action recognition benchmarks while also being more efficient than state-of-the-art 3D CNNs. Afterwards, I will present COBE, a new large-scale framework for learning contextualized object representations in settings involving human-object interactions. Our approach exploits automatically-transcribed speech narrations from instructional YouTube videos, and it does not require manual annotations. Lastly, I will introduce a self-supervised learning approach for predicting a basketball player's future motion trajectory from an unlabeled collection of first-person basketball videos.
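    The divided space-time routing behind a TimeSformer-style model can be sketched with plain scaled dot-product attention over (frame, patch) tokens. The sketch below omits learned projections and multi-head structure (identity q = k = v), so it only shows the factorized attention pattern, not the actual architecture:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over a set of tokens."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    s = s - s.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(s)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def divided_space_time_attention(x):
    """Divided attention on tokens of shape (T frames, N patches, D dims):
    temporal attention within each patch position, then spatial attention
    within each frame, instead of full joint space-time attention."""
    T, N, D = x.shape
    # temporal: each spatial location attends across frames
    xt = np.stack([attention(x[:, n], x[:, n], x[:, n]) for n in range(N)], axis=1)
    # spatial: each frame attends across patches
    xs = np.stack([attention(xt[t], xt[t], xt[t]) for t in range(T)], axis=0)
    return xs
```

    Factorizing attention this way reduces the cost per layer from O((T*N)^2) token pairs to O(T^2*N + N^2*T), which is one reason such models can be more efficient than full joint attention or 3D convolutions.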

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Gedas Bertasius is a postdoctoral researcher at Facebook AI working on computer vision and machine learning problems. His current research focuses on topics of video understanding, first-person vision, and multi-modal deep learning. He received his bachelor's degree in Computer Science from Dartmouth College, and a Ph.D. in Computer Science from the University of Pennsylvania. His recent work was nominated for the CVPR 2020 best paper award.

    Host: Ramakant Nevatia

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Jiaming Song (Stanford University) - Beyond Function Approximation: Compression, Inference, and Generation via Supervised Learning

    Thu, Mar 11, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Jiaming Song, Stanford University

    Talk Title: Beyond Function Approximation: Compression, Inference, and Generation via Supervised Learning

    Series: CS Colloquium

    Abstract: Supervised learning methods have advanced considerably thanks to deep function approximators. However, important problems such as compression, probabilistic inference, and generative modeling cannot be directly addressed by supervised learning. At the core, these problems involve estimating (and optimizing) a suitable notion of distance between two probability distributions, which is challenging in high-dimensional spaces. In this talk, I will propose techniques to estimate and optimize divergences more effectively by leveraging advances in supervised learning. I will describe an algorithm for estimating mutual information that approaches a fundamental limit of all valid lower bound estimators and can empirically compress neural networks by up to 70% without losing accuracy. I will also show how these techniques can accelerate long-established probabilistic inference algorithms by nearly 10x, improve generative modeling, and infer suitable rewards for sequential decision making.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Jiaming Song is a fifth-year Ph.D. candidate in the Computer Science Department at Stanford University, advised by Stefano Ermon. His research focuses on learning and inference algorithms for deep probabilistic models with applications in unsupervised representation learning, generative modeling, and inverse reinforcement learning. He received his B.Eng. degree in Computer Science from Tsinghua University in 2016. He was a recipient of the Qualcomm Innovation Fellowship.

    Host: Bistra Dilkina

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Swabha Swayamdipta (Allen Institute for AI) - Addressing Biases for Robust, Generalizable AI

    Thu, Mar 11, 2021 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Swabha Swayamdipta, Allen Institute for AI

    Talk Title: Addressing Biases for Robust, Generalizable AI

    Series: CS Colloquium

    Abstract: Artificial Intelligence has made unprecedented progress in the past decade. However, there still remains a large gap between the decision-making capabilities of humans and machines. In this talk, I will investigate two factors to explain why. First, I will discuss the presence of undesirable biases in datasets, which ultimately hurt generalization. I will then present bias mitigation algorithms that boost the ability of AI models to generalize to unseen data. Second, I will explore task-specific prior knowledge which aids robust generalization, but is often ignored when training modern AI architectures. Throughout this discussion, I will focus my attention on language applications, and will show how certain underlying structures can provide useful inductive biases for inferring meaning in natural language. I will conclude with a discussion of how the broader framework of dataset and model biases will play a critical role in the societal impact of AI, going forward.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Swabha Swayamdipta is a postdoctoral investigator at the Allen Institute for AI, working with Yejin Choi. Her research focuses on natural language processing, where she explores dataset and linguistic structural biases, and model interpretability. Swabha received her Ph.D. from Carnegie Mellon University, under the supervision of Noah A. Smith and Chris Dyer. During most of her Ph.D., she was a visiting student at the University of Washington. She holds a master's degree from Columbia University, where she was advised by Owen Rambow. Her research has been published at leading NLP and machine learning conferences, and received an honorable mention for best paper at ACL 2020.

    Host: Xiang Ren

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Amy Zhang (McGill University) - Exploiting latent structure and bisimulation metrics for better generalization in reinforcement learning

    Mon, Mar 15, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Amy Zhang, McGill University

    Talk Title: Exploiting latent structure and bisimulation metrics for better generalization in reinforcement learning

    Series: CS Colloquium

    Abstract: The advent of deep learning has ushered in unprecedented progress in various fields of machine learning. Despite recent advances in deep reinforcement learning (RL) algorithms, however, there is no method today that exhibits anywhere near the generalization that we have seen in computer vision and NLP. Indeed, one might ask whether deep RL algorithms are even capable of the kind of generalization that is needed for open-world environments. This challenge is fundamental and will not be solved with incremental algorithmic advances.

    In this talk, we propose to incorporate different assumptions that better reflect the real world and allow the design of novel algorithms with theoretical guarantees to address this fundamental problem. We first present how state abstractions can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction. Our goal is to learn state abstractions that both provide for effective downstream control and invariance to task-irrelevant details. We use bisimulation metrics to quantify behavioral similarity between states, and learn robust latent representations which encode only the task-relevant information from observations. We provide theoretical guarantees for the learned approximate abstraction and extend this notion to families of tasks with varying dynamics.
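    As a toy illustration of the bisimulation idea, such a metric can be computed by fixed-point iteration on a small finite MDP. The sketch below assumes deterministic transitions to stay self-contained; in the stochastic setting described in the talk, the recursion's second term is a Wasserstein distance between transition distributions under the current metric:

```python
import numpy as np

def bisim_metric(R, nxt, c=0.9, iters=500):
    """Bisimulation-style metric on a finite MDP with one action and
    deterministic transitions:
        d(i, j) = |R[i] - R[j]| + c * d(nxt[i], nxt[j])
    computed by fixed-point iteration (a contraction for c < 1).
    R   : per-state reward (length-n array)
    nxt : per-state successor index (length-n int array)
    """
    n = len(R)
    d = np.zeros((n, n))
    base = np.abs(R[:, None] - R[None, :])    # immediate reward differences
    for _ in range(iters):
        d = base + c * d[nxt][:, nxt]         # d[nxt[i], nxt[j]] for all i, j
    return d
```

    States with identical rewards and behaviorally equivalent successors end up at distance zero, which is exactly the invariance property that lets a representation discard task-irrelevant observation details.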

    This lecture satisfies requirements for CSCI 591: Research Colloquium


    Biography: I am a final year PhD candidate at McGill University and the Mila Institute, co-supervised by Profs. Joelle Pineau and Doina Precup. I am also a researcher at Facebook AI Research. My work focuses on bridging theory and practice through learning approximate state abstractions and learning representations for generalization in reinforcement learning. I previously obtained an M.Eng. in EECS and dual B.Sci. degrees in Mathematics and EECS from MIT.

    Host: Sven Koenig

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Mai ElSherief (Georgia Institute of Technology) - Computational Methods for Identifying Deviant Content in Online Media Ecosystems

    Mon, Mar 15, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Mai ElSherief, Georgia Institute of Technology

    Talk Title: Computational Methods for Identifying Deviant Content in Online Media Ecosystems

    Series: CS Colloquium

    Abstract: In recent years, the pervasive adoption of social media has created an ecosystem populated by a pandemonium of opinion, true and false information, and an unprecedented quantity of data on many topics. While online information ecosystems provide freedom of expression and give voice to individuals, they have also suffered a wave of disorder due to the prevalence of malevolent online misuse, manifested as online harassment, cyberbullying, and hate speech; and online misinformation, such as fake news and medical misinformation movements. In this talk, I will present language-centric approaches for improving online hate speech detection and characterization. I will then showcase a human-machine mixed-initiative approach that aims to investigate and detect online misinformation surrounding Opioid Use Disorders, in collaboration with the Centers for Disease Control and Prevention.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Mai ElSherief is a Postdoctoral Fellow at the School of Interactive Computing at Georgia Tech. Her research interests lie at the intersection of Social Computing, Natural Language Processing, and Online Social Networks, specifically for causes of social good. In her research, she adopts Natural Language Processing and Machine Learning methods to examine human behavior pertaining to online abuse, biases, public health intelligence, and community wellbeing. Prior to her Postdoctoral Fellowship, she received her Ph.D. from the Computer Science Department at UC Santa Barbara, within the Mobility Management and Networking (MOMENT) Lab, along with a Certificate in College and University Teaching (CCUT), which demonstrates superior competence and experience in preparation for teaching at the university or college level.

    Her research on computationally understanding the psychological impacts of active shooting drills on K-12 school communities received press coverage by NBC, the Hill, and 11Alive. She has been a summer research intern at the Berkman Klein Center for Internet and Society at Harvard University, studying anti-immigration sentiment and the discursive practices of online hate groups. She was selected as a 2020 UC Berkeley EECS Rising Stars participant. She was also awarded the UCSB 2019 CS Outstanding Graduate Student award and the 2017 Fiona and Michael Goodchild Graduate Mentoring Award for her distinguished research mentoring of undergraduate students.


    Host: Bistra Dilkina

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Dhanya Sridhar (Columbia University) - Beyond prediction: NLP for causal inference

    Tue, Mar 16, 2021 @ 09:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dhanya Sridhar, Columbia University

    Talk Title: Beyond prediction: NLP for causal inference

    Series: CS Colloquium

    Abstract: Why do some misleading articles go viral? Does partisan speech affect how people behave? Many pressing questions require understanding the effects of language. These are causal questions: did an article's writing style cause it to go viral or would it have gone viral anyway? With text data from social media and news sites, we can build predictors with natural language processing (NLP) techniques but these methods can confuse correlation with causation. In this talk, I discuss my recent work on NLP methods for making causal inferences from text. Text data present unique challenges for disentangling causal effects from non-causal correlations. I present approaches that address these challenges by extending black box and probabilistic NLP methods. I outline the validity of these methods for causal inference, and demonstrate their applications to online forum comments and consumer complaints. I conclude with my research vision for a data analysis pipeline that bridges causal thinking and machine learning to enable better decision-making and scientific understanding.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Dhanya Sridhar is a postdoctoral researcher in the Data Science Institute at Columbia University. She completed her PhD at the University of California Santa Cruz. Her current research is at the intersection of machine learning and causal inference, focusing on applications to social science. Her thesis research focused on probabilistic models of relational data.

    Host: Fei Sha

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Aloni Cohen (Boston University) - Bridging the Divide Between Computer Science and Law

    Tue, Mar 16, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Aloni Cohen , Boston University

    Talk Title: Bridging the Divide Between Computer Science and Law

    Series: CS Colloquium

    Abstract: Seriously engaging with law and policy exposes new computer science research directions that also have policy consequences. My work aims to understand and resolve the tensions between the theory of privacy and cryptography on the one hand, and, on the other, the privacy laws that govern their eventual real-world context. In this talk, I'll describe work that tackles three broad questions: How can we bridge the basic concepts of data privacy in computer science and law? How can privacy theory have a positive impact on policy? How can we incorporate legal powers and constraints into our cryptographic threat models for better cryptography?

    This lecture satisfies requirements for CSCI 591: Research Colloquium.


    Biography: Aloni Cohen is a Postdoctoral Associate at Boston University, with a joint appointment at the Hariri Institute for Computing and the School of Law. His research explores the interplay between theoretical cryptography, privacy, law, and policy. Aloni earned his PhD in electrical engineering and computer science at MIT, where he was advised by Shafi Goldwasser and supported by a Facebook Fellowship and an NSF Graduate Student Fellowship. Aloni is a former affiliate at the Berkman Klein Center for Internet & Society and a Fellow at the Aspen Tech Policy Hub.

    Host: Aleksandra Korolova

    Audiences: By invitation only.

    Contact: Assistant to CS chair

  • CS Colloquium: Amy Pavel (Carnegie Mellon University / AI/ML Apple) - Human-AI Systems for Creating and Understanding Videos

    Thu, Mar 18, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Amy Pavel, Carnegie Mellon University / AI/ML Apple

    Talk Title: Human-AI Systems for Creating and Understanding Videos

    Series: CS Colloquium

    Abstract: Video is becoming a core medium for communicating a wide range of content, including educational lectures, vlogs, and how-to tutorials. While videos are engaging and informative, they lack the familiar and useful affordances of text for browsing, skimming, and flexibly transforming information. This severely limits who can interact with video content and how they can interact with it, makes editing a laborious process, and means that much of the information in videos is not accessible to everyone.

    But, what future systems will make videos useful for all users?

    In this talk, I'll share my work creating interactive Human-AI systems that combine the benefits of multiple media of communication (e.g., text, video, and audio) in two key areas: 1) helping domain experts find content of interest in videos, and 2) making videos accessible to people who are blind or have visual impairments. First, I'll discuss core challenges of finding information in videos from interviews with domain experts and people with disabilities. Then, I will present new systems that leverage AI, and the results of technical and user evaluations that demonstrate system efficacy. I will conclude with how hybrid HCI-AI breakthroughs will make digital communication more effective and accessible in the future, and how new interactions can help us to realize the full potential of recent AI/ML advances.


    Biography: Amy Pavel is a Postdoctoral Fellow at Carnegie Mellon University and a Research Scientist in AI/ML at Apple. Her research explores AI-driven interactive techniques for making digital communication effective and accessible for all. Her work creating Human-AI systems to improve communication has appeared at ACM/IEEE conferences including UIST, CHI, ASSETS, and VR. She recently served as an associate chair for the UIST and CHI program committees, received 2 best paper nominations at CHI, and was selected as a Rising Star in EECS. She previously received her Ph.D. in Computer Science at UC Berkeley, where her work developing interactive video abstractions was supported by an NDSEG fellowship and an EECS Excellence Award. Read more about her research at: https://amypavel.com/



    Host: Heather Culbertson / Bistra Dilkina

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CAIS Seminar: Maria Rodriguez (University at Buffalo) - The Root of Algorithmic Bias and How to Deal With it

    Wed, Mar 24, 2021 @ 01:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Maria Rodriguez, University at Buffalo

    Talk Title: The Root of Algorithmic Bias and How to Deal With it

    Series: USC Center for Artificial Intelligence in Society (CAIS) Seminar Series

    Abstract: In this talk, Dr. Rodriguez describes what she sees as the central issue undergirding academic conversations concerning bias in algorithmic output. Laying out this cause in plain terms, Dr. Rodriguez offers actionable mitigation strategies for individuals, groups and organizations invested in producing tech solutions for social good.

    Register in advance for this webinar at:

    https://usc.zoom.us/webinar/register/WN_lgXhhV2zR5ShvC70HyEKUg

    After registering, attendees will receive a confirmation email containing information about joining the webinar.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.


    Biography: Dr. Rodriguez is an Assistant Professor at the School of Social Work, University at Buffalo (SUNY); a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University; a Faculty Fellow at the Center for Democracy and Technology; as well as a member of the Twitter Academic Research Advisory Board. Her work lies at the intersection of computational social science, demography, and social policy.


    Host: USC Center for Artificial Intelligence in Society (CAIS)

    Webcast: https://usc.zoom.us/webinar/register/WN_lgXhhV2zR5ShvC70HyEKUg

    Location: Online Zoom Webinar

    WebCast Link: https://usc.zoom.us/webinar/register/WN_lgXhhV2zR5ShvC70HyEKUg

    Audiences: Everyone Is Invited

    Contact: Computer Science Department

  • CS Colloquium: Michał Dereziński (University of California, Berkeley) - Bridging algorithmic and statistical randomness in machine learning

    Thu, Mar 25, 2021 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Michał Dereziński, University of California, Berkeley

    Talk Title: Bridging algorithmic and statistical randomness in machine learning

    Series: CS Colloquium

    Abstract: Randomness is a key resource in designing efficient algorithms, and it is also a fundamental modeling framework in statistics and machine learning. Methods that lie at the intersection of algorithmic and statistical randomness are at the forefront of modern data science. In this talk, I will discuss how statistical assumptions affect the bias-variance trade-offs and performance characteristics of randomized algorithms for, among others, linear regression, stochastic optimization, and dimensionality reduction. I will also present an efficient algorithmic framework, called joint sampling, which is used to both predict and improve the statistical performance of machine learning methods, by injecting carefully chosen correlations into randomized algorithms.

    In the first part of the talk, I will focus on the phenomenon of inversion bias, which is a systematic bias caused by inverting random matrices. Inversion bias is a significant bottleneck in parallel and distributed approaches to linear regression, second order optimization, and a range of statistical estimation tasks. Here, I will introduce a joint sampling technique called Volume Sampling, which is the first method to eliminate inversion bias in model averaging. In the second part, I will demonstrate how the spectral properties of data distributions determine the statistical performance of machine learning algorithms, going beyond worst-case analysis and revealing new phase transitions in statistical learning. Along the way, I will highlight a class of joint sampling methods called Determinantal Point Processes (DPPs), popularized in machine learning over the past fifteen years as a tractable model of diversity. In particular, I will present a new algorithmic technique called Distortion-Free Intermediate Sampling, which drastically reduced the computational cost of DPPs, turning them into a practical tool for large-scale data science.
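    The inversion bias mentioned above is easy to see numerically: averaging the inverses of i.i.d. sample covariance matrices does not give the inverse of their average, which is exactly what goes wrong when distributed workers each invert their own local estimate. A Monte-Carlo sketch (dimensions, sample counts, and the Gaussian data model are arbitrary illustrative choices, not the speaker's setup):

```python
import numpy as np

def inversion_bias_demo(d=3, n=8, trials=4000, seed=0):
    """Compare E[A^{-1}] with (E[A])^{-1} for A = X^T X / n, X with i.i.d.
    standard Gaussian rows. For Gaussian data E[A] = I, so (E[A])^{-1} = I,
    but E[A^{-1}] = n/(n-d-1) * I is strictly larger: inverting before
    averaging systematically inflates the estimate. Returns the
    spectral-norm gap between the two quantities."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((d, d))
    acc_inv = np.zeros((d, d))
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        A = X.T @ X / n
        acc += A
        acc_inv += np.linalg.inv(A)
    mean_inv = acc_inv / trials           # average of inverses
    inv_mean = np.linalg.inv(acc / trials)  # inverse of the average
    return np.linalg.norm(mean_inv - inv_mean, 2)
```

    With d = 3 and n = 8 the gap concentrates near n/(n-d-1) - 1 = 1, so the bias is far from negligible; joint sampling schemes such as volume sampling are designed to make this gap vanish in model averaging.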

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Michał Dereziński is a postdoctoral fellow in the Department of Statistics at the University of California, Berkeley. Previously, he was a research fellow at the Simons Institute for the Theory of Computing (Fall 2018, Foundations of Data Science program). He obtained his Ph.D. in Computer Science at the University of California, Santa Cruz, advised by professor Manfred Warmuth, where he received the Best Dissertation Award for his work on sampling methods in statistical learning. Michał's current research is focused on developing scalable randomized algorithms with robust statistical guarantees for machine learning, data science and optimization. His work on reducing the cost of interpretability in dimensionality reduction received the Best Paper Award at the Thirty-fourth Conference on Neural Information Processing Systems. More information is available at: https://users.soe.ucsc.edu/~mderezin/.

    Host: David Kempe

    Audiences: By invitation only.

    Contact: Assistant to CS chair
