University of Southern California

Events Calendar




Conferences, Lectures, & Seminars
Events for November

  • CS Colloquium: Fei Miao - Learning and Control for Safety, Efficiency, and Resiliency of Embodied AI

    Wed, Nov 01, 2023 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Fei Miao, University of Connecticut

    Talk Title: Learning and Control for Safety, Efficiency, and Resiliency of Embodied AI

    Series: CS Colloquium

    Abstract: With the rapid evolution of sensing, communication, and computation, integrating learning and control presents significant opportunities for Embodied AI. However, current decision-making frameworks lack a comprehensive understanding of the tridirectional relationship among communication, learning, and control, posing challenges for multi-agent systems in complex environments. In the first part of the talk, we focus on learning and control with communication capabilities. We design an uncertainty quantification method for collaborative perception in connected autonomous vehicles (CAVs). Our findings demonstrate that communication among multiple agents can enhance object detection accuracy and reduce uncertainty. Building upon this, we develop a safe and scalable deep multi-agent reinforcement learning (MARL) framework that leverages shared information among agents to improve system safety and efficiency. We validate the benefits of communication in MARL, particularly in the context of CAVs in challenging mixed-traffic scenarios. We incentivize agents to communicate and coordinate through a novel reward reallocation scheme for MARL based on the Shapley value. Additionally, we present our theoretical analysis of robust MARL methods under state uncertainties, such as uncertainty quantification in the perception modules or worst-case adversarial state perturbations. In the second part of the talk, we briefly outline our research contributions on robust MARL and data-driven robust optimization for sustainable mobility. We also highlight our research results concerning CPS security. Through our findings, we aim to advance Embodied AI and CPS for safety, efficiency, and resiliency in dynamic environments.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium
     

    Biography: Fei Miao is the Pratt & Whitney Associate Professor in the Department of Computer Science and Engineering, with a courtesy appointment in the Department of Electrical & Computer Engineering, at the University of Connecticut, which she joined in 2017. She is affiliated with the Institute of Advanced Systems Engineering and the Eversource Energy Center. She was a postdoctoral researcher at the GRASP Lab and the PRECISE Lab of the University of Pennsylvania from 2016 to 2017. She received her Ph.D. degree and the Best Doctoral Dissertation Award in Electrical and Systems Engineering, with a dual M.S. degree in Statistics, from the University of Pennsylvania in 2016, and her B.S. degree in Automation from Shanghai Jiao Tong University in 2010. Her research focuses on multi-agent reinforcement learning, robust optimization, uncertainty quantification, and game theory to address safety, efficiency, robustness, and security challenges of Embodied AI and CPS, for systems such as connected autonomous vehicles, sustainable and intelligent transportation systems, and smart cities. Dr. Miao is a recipient of the NSF CAREER award and several other NSF awards. She received the Best Paper Award at the 12th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS) in 2021, was a Best Paper Award Finalist at the 6th ICCPS in 2015, and received the Best Paper Award at the 2023 AAAI DACC workshop.

    Host: Heather Culbertson

    Location: Ronald Tutor Hall of Engineering (RTH) - 115

    Audiences: Everyone Is Invited

    Contact: Melissa Ochoa

  • Adaptive Attention: Bringing Active Vision into the Camera - Prof. Sanjeev Koppal

    Thu, Nov 02, 2023 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Sanjeev Koppal, University of Florida

    Talk Title: Adaptive Attention: Bringing Active Vision into the Camera

    Abstract: Most cameras today capture images without considering scene content. In contrast, animal eyes have fast mechanical movements that control how the scene is imaged in detail by the fovea, where visual acuity is highest. The prevalence of active vision during biological imaging, and the wide variety of it, makes it very clear that this is an effective visual design strategy. In this talk, I cover our recent work on creating both new camera designs and novel vision algorithms to enable adaptive and selective active vision and imaging inside cameras and sensors.

    Biography: Sanjeev J. Koppal is an Associate Professor in the University of Florida's Electrical and Computer Engineering Department and is a Kent and Linda Fuchs Faculty Fellow. He also holds a UF Term Professorship for 2021-23. Sanjeev is the Director of the FOCUS Lab at UF. Since 2022, Sanjeev has been an Amazon Scholar with Amazon Robotics. Prior to joining UF, he was a researcher at the Texas Instruments Imaging R&D lab. Sanjeev obtained his master's and Ph.D. degrees from the Robotics Institute at Carnegie Mellon University. After CMU, he was a postdoctoral research associate in the School of Engineering and Applied Sciences at Harvard University. He received his B.S. degree from the University of Southern California in 2003 as a Trustee Scholar. He is a co-author on best student paper awards for ECCV 2016 and NEMS 2018, and work from his FOCUS Lab was a CVPR 2019 best-paper finalist. Sanjeev won an NSF CAREER award in 2020 and is an IEEE Senior Member and an Optica Senior Member. His interests span computer vision, computational photography and optics, novel cameras and sensors, 3D reconstruction, physics-based vision, and active illumination.

    Host: Gaurav S. Sukhatme

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: Melissa Ochoa

  • CAIS Webinar: A Carative Approach to AI Governance


    Thu, Nov 09, 2023 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dr. Kush R. Varshney, IBM Research

    Talk Title: A Carative Approach to AI Governance

    Abstract: In recent times, we often hear a call for the governance of AI systems, but what does that really mean? In this talk, I will first adopt a control-theory perspective on governance: value alignment determines the reference input, data scientists act as the controller to meet those values in a machine learning system, and facts captured in transparent documentation serve as the feedback signal. I will then adopt a nursing-theory perspective to explain how the control-theory view lacks caring, and argue for a carative approach that starts with the real-world problem as experienced by the most vulnerable people. I will conclude with an example of a project using machine learning to evaluate applicants for home solar panel systems in rural India.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Kush R. Varshney is a distinguished research scientist and senior manager at IBM Research - T. J. Watson Research Center, where he leads the Trustworthy Machine Intelligence department and the IBM Science for Social Good initiative.

    Register for the Zoom webinar here: https://usc.zoom.us/webinar/register/WN_p39kMQKnTaKdaJRKNox5uQ

    Host: CAIS

    More Info: https://usc.zoom.us/webinar/register/WN_p39kMQKnTaKdaJRKNox5uQ

    Audiences: Everyone Is Invited

    Contact: Melissa Ochoa

    Event Link: https://usc.zoom.us/webinar/register/WN_p39kMQKnTaKdaJRKNox5uQ

  • CAIS Webinar: Sidestepping the Black-Box: A New Paradigm for Explainable AI


    Thu, Nov 16, 2023 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dr. Amulya Yadav, PNC Technologies Career Development Assistant Professor (Penn State University)

    Talk Title: Sidestepping the Black-Box: A New Paradigm for Explainable AI

    Abstract: Existing work in Explainable Artificial Intelligence (XAI) has focused on developing techniques to interpret decisions made by pre-trained, black-box machine learning (ML) models. This black-box assumption is reasonable in many settings; e.g., explaining Amazon's recommender systems requires assuming a black-box model, because glass-box access to Amazon's proprietary models is infeasible. However, I argue that in many real-world settings (especially those that pertain to low-resource domains), the black-box assumption is unnecessary, undesirable, and often overly limiting. In this talk, I motivate the need to move away from the black-box assumption of XAI by discussing two deployed use cases of responsible AI research: (i) automated tele-triage for poor pregnant women in Kenya, and (ii) raising awareness of HIV among homeless youth in Los Angeles. Drawing on my experiences deploying AI in these domains, I will argue the need for a new paradigm in explainable AI. Next, I will discuss two new frameworks: (i) CounterNet, a novel end-to-end learning framework that integrates ML model training and the generation of corresponding counterfactual (CF) explanations into a single end-to-end pipeline, and (ii) RoCourseNet, a training framework that jointly optimizes predictions and recourses that are robust to future data shifts.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Amulya Yadav is the PNC Technologies Career Development Assistant Professor in the College of Information Sciences and Technology at Penn State University, where he serves as Director of the RAISE Research Lab. He is also the Associate Director (Programs) at the Center for Socially Responsible AI at Penn State. Amulya's research in Responsible AI and Artificial Intelligence for Social Good focuses on developing theoretically grounded approaches to real-world problems that can have an impact in the field. His algorithms have been deployed in the real world, particularly in public health and wildlife protection. Amulya is a recipient of the AAMAS 2016 Best Student Paper Award, the AAAI 2017 Best Video and Best Student Video Award, the IDEAS 2016 Most Visionary Paper Award, and an AAMAS 2017 Best Paper Award nomination. His work has also been highlighted by Mashable.com as one of 26 incredible innovations that improved the world in 2015.

    Amulya holds a Ph.D. in Computer Science from the University of Southern California, and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology (IIT) Patna.

    Register for the Zoom webinar here: https://usc.zoom.us/webinar/register/WN_nPykyeAAQH-B3R6p5-kezg

    Host: CAIS

    More Info: https://usc.zoom.us/webinar/register/WN_nPykyeAAQH-B3R6p5-kezg

    Audiences: Everyone Is Invited

    Contact: Melissa Ochoa

    Event Link: https://usc.zoom.us/webinar/register/WN_nPykyeAAQH-B3R6p5-kezg

  • CS Colloquium: Niloufar Salehi (UC Berkeley) - Designing Reliable Human-AI Interactions


    Tue, Nov 28, 2023 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Niloufar Salehi, UC Berkeley

    Talk Title: Designing Reliable Human-AI Interactions

    Abstract: How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, can produce unpredictable results. This uncertainty becomes even more pronounced in areas where verification is challenging, such as in machine translation or probabilistic genotyping. Providing users with guidance on when to rely on a system is challenging because models can create a wide range of outputs (e.g. text), error boundaries are highly stochastic, and automated explanations themselves may be incorrect. In this talk, I will focus on the case of health-care communication to share approaches to improving the reliability of ML-based systems by designing actionable strategies for users to gauge reliability and recover from potential errors.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Niloufar Salehi is an assistant professor in the School of Information at UC Berkeley and a faculty member of Berkeley AI Research (BAIR). Her research interests are in social computing, human-centered AI, and, more broadly, human-computer interaction (HCI). Her research is conducted in close collaboration with partners and domain experts spanning education, healthcare, and restorative justice. Her work has been published and received awards in premier venues including ACM CHI and CSCW, and has been covered in VentureBeat, Wired, and the Guardian. She is a W. T. Grant Foundation scholar. She received her PhD in computer science from Stanford University in 2018.

    Host: Souti Chattopadhyay

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CSCI 591 Colloquium: Prof. Yisen Wang (Peking University) - Theoretical Understanding of Self-Supervised Learning

    Wed, Nov 29, 2023 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Yisen Wang, Peking University

    Talk Title: Theoretical Understanding of Self-Supervised Learning

    Abstract: Self-supervised learning (SSL) is an unsupervised approach to representation learning that does not rely on human-provided labels. It creates auxiliary tasks on unlabeled input data and learns representations by solving these tasks. SSL has demonstrated great success on various tasks, but existing SSL research has mostly focused on improving empirical performance without a theoretical foundation: while the proposed SSL approaches are empirically effective on benchmarks, they are not well understood from a theoretical perspective. In this talk, I will introduce a series of our recent works on the theoretical understanding of SSL, particularly on contrastive learning and masked autoencoders.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Yisen Wang is an assistant professor at Peking University. His research interests include machine learning theory and algorithms, focusing on adversarial robustness, graph learning, and weakly/self-supervised learning theory. He has published more than 50 papers in top machine learning venues, including ICML, NeurIPS, and ICLR, many of which were selected as Oral or Spotlight presentations. He won the ECML 2021 Best Paper Award.

    Host: Yue Zhao

    More Info: https://usc.zoom.us/j/97892066727?pwd=LytmZmltbDk5aWZtZHdKTjZyclI1QT09

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: Chair's Assistant

    Event Link: https://usc.zoom.us/j/97892066727?pwd=LytmZmltbDk5aWZtZHdKTjZyclI1QT09
