
Events Calendar




Events for the 5th week of March

  • USC Symposium on Frontiers of Generative AI Models in Science and Society

    Mon, Mar 25, 2024 @ 08:30 AM - 06:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Various, USC Machine Learning Center

    Talk Title: USC Symposium on Frontiers of Generative AI Models in Science and Society

    Abstract: The USC Machine Learning Center and the Computer Science Department are excited to host the symposium on "Frontiers of Generative AI Models in Science and Society". Experts in generative AI models will discuss recent progress and applications in science and society.
     
    Keynote Speakers: Alessandro Vespignani (Northeastern University), Nitesh Chawla (Notre Dame), Yizhou Sun (UCLA), & Jian Ma (CMU)    
     
    Spotlight Speakers: Jieyu Zhao, Robin Jia, Yue Wang, Vatsal Sharan, & Ruishan Liu (USC Thomas Lord Department of Computer Science)

    Host: USC Machine Learning Center

    More Info: https://www.eventbrite.com/e/usc-symposium-on-frontiers-of-generative-ai-models-in-science-and-society-tickets-860269668737?aff=oddtdtcreator

    Location: Michelson Center for Convergent Bioscience (MCB) - 101

    Audiences: Everyone Is Invited

    Contact: Thomas Lord Department of Computer Science

    Event Link: https://www.eventbrite.com/e/usc-symposium-on-frontiers-of-generative-ai-models-in-science-and-society-tickets-860269668737?aff=oddtdtcreator

  • CS Colloquium: Junzhe Zhang - Towards Causal Reinforcement Learning

    Mon, Mar 25, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Junzhe Zhang, Columbia University

    Talk Title: Towards Causal Reinforcement Learning

    Abstract: Causal inference provides a set of principles and tools that allows one to combine data and knowledge about an environment to reason with questions of a counterfactual nature - i.e., what would have happened if the reality had been different - even when no data of this unrealized reality is currently available. Reinforcement learning provides a collection of methods that allows the agent to reason about optimal decision-making under uncertainty by trial and error - i.e., what would the consequences (e.g., subsequent rewards, states) be had the action been different? While these two disciplines have evolved independently and with virtually no interaction, they operate over various aspects of the same building block, i.e., counterfactual reasoning, making them umbilically connected.

    This talk will present a unified theoretical framework, called causal reinforcement learning, that explores the nuanced interplays between causal inference and reinforcement learning. I will discuss a recent breakthrough in partial identification that allows one to infer unknown causal effects from a combination of model assumptions and available data. Delving deeper, I will then demonstrate how this method could be applicable to address some practical challenges in classic reinforcement learning tasks, including robust off-policy evaluation from confounded observations and accelerating online learning with offline data.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Junzhe Zhang is a postdoctoral research scientist in the Causal AI lab at Columbia University. He obtained his doctoral degree in Computer Science at Columbia University, advised by Elias Bareinboim. His research centers on causal inference theory and its applications in reinforcement learning, algorithmic fairness, and explainability. His works have been selected for oral presentations in top refereed venues such as NeurIPS.

    Host: Sven Koenig

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Ph.D. Thesis Defense - Ali Omrani

    Mon, Mar 25, 2024 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Ph.D. Thesis Defense - Ali Omrani
     
    Committee: Morteza Dehghani (Chair),  Xiang Ren, Robin Jia, Payam Piray, and Jeffrey Sorensen 
     
    Title: Countering Problematic Content in Digital Space: Bias Reduction and Dynamic Content Adaptation
     
    Abstract: Problematic content, such as hate speech, poses a significant challenge to society, leading to discrimination and exclusion while undermining inclusivity and well-being. This thesis outlines my efforts to create adaptable solutions for combating problematic content in digital space through a theory-motivated approach that bridges language technology and social sciences. I will begin by presenting an innovative group-agnostic method for bias mitigation in language models, grounded in a deep understanding of stereotyping from social psychology. Subsequently, I will introduce a novel continual learning framework for problematic content detection that captures the ever-evolving nature of this issue. Afterward, I discuss my work extending this framework to multilingual settings, with a specific emphasis on two key aspects: (1) harnessing cultural diversity for cross-lingual transfer of offensive language detection, and (2) investigating the challenges posed by disparities in data quality across various languages.

    Date and Time: March 25th, 2:00 PM - 4:00 PM
    Location: Room 266, USC Brain and Creativity Institute 605, 3620 McClintock Ave, Los Angeles, CA 90089
     
     

    Location: Dornsife Neuroscience Imaging Center (DNI) - 266

    Audiences: Everyone Is Invited

    Contact: CS Events

  • CS Colloquium: Xiang Anthony Chen - Catalyzing AI Advances with Human-Centered Interactive Systems

    Tue, Mar 26, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Xiang Anthony Chen, UCLA

    Talk Title: Catalyzing AI Advances with Human-Centered Interactive Systems

    Abstract: Despite the unprecedented advances in AI, there has always been a gap between how well an AI model performs and how such performance can serve humanity. In this seminar, I will describe my past work to close this gap. Specifically, I develop human-centered interactive systems that catalyze advances in AI to achieve three levels of objectives: aligning with human values, assimilating human intents, and augmenting human abilities. Further, I will discuss my ongoing and future research, focused on AI for scientific discovery, AI with Theory of Mind, and AI-mediated human communication.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Xiang 'Anthony' Chen is an Assistant Professor in UCLA's Department of Electrical & Computer Engineering. He received a Ph.D. in the School of Computer Science at Carnegie Mellon University. Anthony's area of expertise is Human-Computer Interaction (HCI). His research employs human-centered design methods to build systems that catalyze advances in AI to better serve humanity, supported by an NSF CAREER Award, ONR YIP Award, Google Research Scholar Award, Intel Rising Star Award, Hellman Fellowship, NSF CRII Award, and Adobe Ph.D. Fellowship. Anthony's work has resulted in 55+ publications with three best paper awards and three honorable mentions in top-tier HCI conferences.

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CAIS Webinar: Dr. Jessica Ridgway (University of Chicago) - Predictive Analytics for Engagement in HIV Care

    Tue, Mar 26, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dr. Jessica Ridgway, University of Chicago

    Talk Title: Predictive Analytics for Engagement in HIV Care

    Abstract: Engagement in care is essential for the health of people with HIV, but only half of people with HIV in the U.S. receive regular medical care. Dr. Ridgway will discuss her research utilizing machine learning models based on electronic medical record data to predict engagement in care among people with HIV. She has developed machine learning models using structured data as well as natural language processing of unstructured clinical notes. She will discuss challenges and pitfalls in utilizing electronic medical record data for HIV-related predictive modeling, as well as implications for implementation in clinical practice.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Jessica Ridgway, MD, MS, is an Associate Professor of Medicine in the Section of Infectious Diseases and Global Health and Director of Medical Informatics at the University of Chicago. She is Director of Predictive Analytics for the Chicago Center for HIV Elimination. Her research focuses on utilizing large electronic medical record databases to understand HIV epidemiology across the continuum of care and implementation of clinical informatics interventions to improve HIV care and prevention.

    Host: USC Center for Artificial Intelligence in Society (CAIS)

    More Info: https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

    Location: Zoom only - https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

  • PhD Dissertation Defense - Aniruddh Puranic

    Tue, Mar 26, 2024 @ 03:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Dissertation Defense - Aniruddh Puranic

    Committee: Jyotirmoy V. Deshmukh (Chair), Gaurav Sukhatme, Stefanos Nikolaidis, and Stephen Tu

    Title: Sample-Efficient and Robust Neurosymbolic Learning from Demonstrations

    Abstract: Learning-from-demonstrations (LfD) is a popular paradigm for obtaining effective robot control policies for complex tasks via reinforcement learning (RL) without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns about the safety and interpretability of the learned control policies. To address these issues, this thesis develops a neurosymbolic learning framework, a hybrid method that integrates neural network-based learning with symbolic (e.g., rule, logic, graph) reasoning to leverage the strengths of both approaches. Specifically, this framework uses Signal Temporal Logic (STL) to express high-level robotic tasks and its quantitative semantics to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards and can also define interesting causal dependencies between tasks, such as sequential task specifications. This dissertation presents the LfD-STL framework, which learns from even suboptimal/imperfect demonstrations and STL specifications to infer reward functions; these reward functions can then be used by reinforcement learning algorithms to obtain control policies. Experimental evaluations on a diverse set of environments show that the additional information in the form of formally specified task objectives allows the framework to outperform prior state-of-the-art LfD methods.

    Many real-world robotic tasks consist of multiple objectives (specifications), some of which may be inherently competitive, thus prompting the need for deliberate trade-offs. This dissertation further extends the LfD-STL framework by developing a metric, the performance graph: a directed graph that utilizes the quality of demonstrations to provide intuitive explanations of the performance and trade-offs of demonstrated behaviors. The performance graph also offers concise insights into the learning process of the RL agent, thereby enhancing interpretability, as corroborated by a user study. Finally, the thesis discusses how performance graphs can be used as an optimization objective to guide RL agents to potentially learn policies that perform better than the (imperfect) demonstrators via apprenticeship learning (AL). The theoretical machinery developed for the AL-STL framework examines guarantees on the safety and performance of RL agents.

    Zoom: https://usc.zoom.us/j/98964159897?pwd=a2ljaGNEOGcvMkl1WU9yZENPc0M1dz09

    Location: Ronald Tutor Hall of Engineering (RTH) - 306

    Audiences: Everyone Is Invited

    Contact: Aniruddh Puranic

    Event Link: https://usc.zoom.us/j/98964159897?pwd=a2ljaGNEOGcvMkl1WU9yZENPc0M1dz09

  • CS Colloquium: Paul Liang - Foundations of Multisensory Artificial Intelligence

    Wed, Mar 27, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Paul Liang, CMU

    Talk Title: Foundations of Multisensory Artificial Intelligence

    Abstract: Building multisensory AI systems that learn from multiple sensory inputs such as text, speech, video, real-world sensors, wearable devices, and medical data holds great promise for impact in many scientific areas with practical benefits, such as in supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. In this talk, I will discuss my research on the machine learning principles of multisensory intelligence, as well as practical methods for building multisensory foundation models over many modalities and tasks. In the first half, I will present a theoretical framework formalizing how modalities interact with each other to give rise to new information for a task. These interactions are the basic building blocks in all multimodal problems, and their quantification enables users to understand their multimodal datasets and design principled approaches to learn these interactions. In the second part, I will present my work in cross-modal attention and multimodal transformer architectures that now underpin many of today's multimodal foundation models. Finally, I will discuss our collaborative efforts in scaling AI to many modalities and tasks for real-world impact on mental health, cancer prognosis, and robot control.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Paul Liang is a Ph.D. student in Machine Learning at CMU, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. He studies the machine learning foundations of multisensory intelligence to design practical AI systems that integrate, learn from, and interact with a diverse range of real-world sensory modalities. His work has been applied in affective computing, mental health, pathology, and robotics. He is a recipient of the Siebel Scholars Award, Waibel Presidential Fellowship, Facebook PhD Fellowship, Center for ML and Health Fellowship, Rising Stars in Data Science, and 3 best paper/honorable mention awards at ICMI and NeurIPS workshops. Outside of research, he received the Alan J. Perlis Graduate Student Teaching Award for instructing courses on multimodal ML and advising students around the world in directed research.

    Host: Willie Neiswanger / Xiang Ren

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Computer Science General Faculty Meeting

    Wed, Mar 27, 2024 @ 12:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Bi-Weekly regular faculty meeting for invited full-time Computer Science faculty only. Event details emailed directly to attendees.

    Location: Ronald Tutor Hall of Engineering (RTH) - 526

    Audiences: Invited Faculty Only

    Contact: Assistant to CS Chair

  • PhD Thesis Proposal - Xin Qin

    Wed, Mar 27, 2024 @ 12:45 PM - 01:45 PM

    Thomas Lord Department of Computer Science

    Student Activity


    PhD Thesis Proposal - Xin Qin
    Title: Data-driven and Logic-based Analysis of Learning-enabled Cyber-Physical Systems
    Committee: Jyotirmoy Deshmukh, Chao Wang, Souti Chattopadhyay, Yan Liu and Paul Bogdan
     

    Abstract: Rigorous analysis of cyber-physical systems (CPS) is becoming increasingly important, especially for safety-critical applications that use learning-enabled components. In this proposal, we will discuss various pieces of a broad framework that enable scalable reasoning techniques tuned to modern software design practices in autonomous CPS applications. The proposal will center around three main pillars: (1) Statistical verification techniques to give probabilistic guarantees on system correctness; here, we treat the underlying CPS application as a black-box and use distribution-free and model-free techniques to provide probabilistic correctness guarantees. (2) Predictive monitoring techniques that use physics-based or data-driven models of the system to continuously monitor logic-based requirements of systems operating in highly uncertain environments; this allows us to design runtime mitigation approaches to take corrective actions before a safety violation can occur. (3) Robust testing for CPS using reinforcement learning. We train an agent to produce a policy to initiate unsafe behaviors in similar target systems without the need for retraining, thereby allowing for the elicitation of faulty behaviors across various systems.  The proposal hopes to demonstrate the scalability of our approaches on various realistic models of autonomous systems.

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 349

    Audiences: Everyone Is Invited

    Contact: Xin Qin

  • CS Colloquium: Teodora Baluta - New Algorithmic Tools for Rigorous Machine Learning Security Analysis

    Wed, Mar 27, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Teodora Baluta, National University of Singapore

    Talk Title: New Algorithmic Tools for Rigorous Machine Learning Security Analysis

    Abstract: Machine learning security is an emerging area with many open questions lacking systematic analysis. In this talk, I will present three new algorithmic tools to address this gap: (1) algebraic proofs; (2) causal reasoning; and (3) sound statistical verification. Algebraic proofs provide the first conceptual mechanism to resolve intellectual property disputes over training data. I show that stochastic gradient descent, the de facto training procedure for modern neural networks, is a collision-resistant computation under precise definitions. These results open up connections to lattices, mathematical tools currently used in cryptography. I will also briefly mention my efforts to analyze the causes of empirical privacy attacks and defenses using causal models, and to devise statistical verification procedures with 'probably approximately correct' (PAC)-style soundness guarantees.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Teodora Baluta is a Ph.D. candidate in Computer Science at the National University of Singapore. She enjoys working on security problems that are both algorithmic in nature and practically relevant. She is one of the EECS Rising Stars 2023, a Google PhD Fellow, a Dean's Graduate Research Excellence Award recipient and a President's Graduate Fellowship recipient at NUS. She interned at Google Brain working in the Learning for Code team. Her works are published in security (CCS, NDSS), programming languages/verification conferences (OOPSLA, SAT), and software engineering conferences (ICSE, ESEC/FSE). More details are available on her webpage: https://teobaluta.github.io/

    Host: Mukund Raghothaman

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Yangsibo Huang - Auditing Policy Compliance in Machine Learning Systems

    Thu, Mar 28, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Yangsibo Huang, Princeton University

    Talk Title: Auditing Policy Compliance in Machine Learning Systems

    Abstract: As the capabilities of large-scale machine learning models expand, so too do their associated risks. There is an increasing demand for policies that mandate these models to be safe, privacy-preserving, and transparent regarding data usage. However, there are significant challenges in developing enforceable policies and translating qualitative mandates into quantitative, auditable, and actionable criteria. In this talk, I will present my work on addressing these challenges. I will first share my exploration of privacy leakage and mitigation strategies in distributed training. Then, I will explore strategies for auditing compliance with data transparency regulations. I will also examine methods to quantify and assess the fragility of safety alignments in Large Language Models. Finally, I will discuss my plans for future research directions, including collaboration with policy researchers and policymakers.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Yangsibo Huang is a Ph.D. candidate and Wallace Memorial Fellow at Princeton University.  She has been doing research at the intersection of machine learning, systems, and policy, with a focus on auditing and improving machine learning systems’ compliance with policies, from the perspectives of privacy, safety, and data usage. She interned at Google AI, Meta AI, and Harvard Medical School and was named an EECS rising star in 2023.   

    Host: Yue Zhao

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Dissertation Defense - Chuizheng Meng

    Thu, Mar 28, 2024 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee Members: Yan Liu (Chair), Willie Neiswanger, and Assad A Oberai (external member)
     
    Title: Trustworthy Spatiotemporal Prediction Models
     
    Abstract: With the great success of data-driven machine learning methods, concerns about the trustworthiness of machine learning models have emerged in recent years. From the modeling perspective, the lack of trustworthiness amplifies the effect of insufficient training data: purely data-driven models without constraints from domain knowledge tend to suffer from over-fitting and lose generalizability to unseen data. Meanwhile, concerns about data privacy further obstruct the availability of data from more providers. On the application side, the absence of trustworthiness hinders the application of data-driven methods in domains such as spatiotemporal forecasting, which involves data from critical applications including traffic, climate, and energy. My dissertation constructs spatiotemporal prediction models with enhanced trustworthiness from both the model and the data aspects. For model trustworthiness, the dissertation focuses on improving the generalizability of models via the integration of physics knowledge. For data trustworthiness, the dissertation proposes a spatiotemporal forecasting model in the federated learning context, where data in a network of nodes is generated locally on each node and remains decentralized. Furthermore, the dissertation amalgamates the trustworthiness from both aspects and combines the generalizability of knowledge-informed models with the privacy preservation of federated learning for spatiotemporal modeling.

    Location: Waite Phillips Hall Of Education (WPH) - B26

    Audiences: Everyone Is Invited

    Contact: Chuizheng Meng

  • CS Colloquium: Ram Sundara Raman - Global Investigation of Network Connection Tampering

    Thu, Mar 28, 2024 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Ram Sundara Raman, University of Michigan

    Talk Title: Global Investigation of Network Connection Tampering

    Abstract: As the Internet's user base and the criticality of online services continue to expand, powerful adversaries like Internet censors are increasingly monitoring and restricting Internet traffic. These adversaries, powered by advanced network technology, perform large-scale connection tampering attacks that seek to prevent users from accessing specific online content, compromising Internet availability and integrity. In recent years, we have witnessed recurring censorship events affecting Internet users globally, with far-reaching social, financial, and psychological consequences, making them important to study. However, characterizing tampering attacks at a global scale is an extremely challenging problem, given intentionally opaque practices by adversaries, varying tampering mechanisms and policies across networks, evolving environments, sparse ground truth, and safety risks in collecting data. In this talk, I will describe my research on building empirical methods to characterize connection tampering globally and to investigate the network technology enabling tampering. First, I will describe a modular design for the Censored Planet Observatory that enables it to remotely and sustainably measure Internet censorship longitudinally in more than 200 countries. I will introduce time series analysis methods to detect key censorship events in longitudinal Censored Planet data and reveal global censorship trends. I will also briefly describe methods to detect connection tampering using purely passive data. Next, I will introduce novel network measurement methods for locating and examining network devices that perform censorship. Finally, I will describe exciting ongoing and future research directions, such as building intelligent measurement platforms.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Ram Sundara Raman is a PhD candidate in Computer Science and Engineering at the University of Michigan, advised by Prof. Roya Ensafi. His research lies in the intersection of computer security, privacy, and networking, employing empirical methods to study large-scale Internet attacks. Ram has been recognized as a Rising Star at the Workshop on Free and Open Communications on the Internet (FOCI), and was awarded the IRTF Applied Networking Research Prize in 2023. His work has helped produce one of the biggest active censorship measurement platforms, the Censored Planet Observatory, and has helped prevent large-scale attacks on end-to-end encryption.

    Host: Jyo Deshmukh

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs
