
Events Calendar




Conferences, Lectures, & Seminars
Events for March

  • CS Colloquium: Emily Tseng (Cornell University) - Digital Safety and Security for Survivors of Technology-Mediated Harms

    Mon, Mar 04, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Emily Tseng, Cornell University

    Talk Title: Digital Safety and Security for Survivors of Technology-Mediated Harms

    Series: Computer Science Colloquium

    Abstract: Platforms, devices, and algorithms are increasingly weaponized to control and harass the most vulnerable among us. Some of these harms occur at the individual and interpersonal level: for example, abusers in intimate partner violence (IPV) use smartphones and social media to surveil and stalk their victims. Others are more subtle, at the level of social structure: for example, in organizations, workplace technologies can inadvertently scaffold exploitative labor practices. This talk will discuss my research (1) investigating these harms via online measurement studies, (2) building interventions to directly assist survivors with their security and privacy; and (3) instrumenting these interventions as observatories, to enable scientific research into new types of harms as attackers and technologies evolve. I will close by sharing my vision for centering inclusion and equity in digital safety, security and privacy, towards brighter technological futures for us all.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Emily Tseng is a PhD candidate in Information Science at Cornell University. Her research develops the systems, interventions, and design principles we need to make digital technology safe and affirming for everyone. Emily’s work has been published at top-tier venues in human-computer interaction (ACM CHI, CSCW) and computer security and privacy (USENIX Security, IEEE Oakland). For 5 years, she has worked as a researcher-practitioner with the Clinic to End Tech Abuse, where her work has enabled specialized security services for over 500 survivors of intimate partner violence (IPV). Emily is the recipient of a Microsoft Research PhD Fellowship, Rising Stars in EECS, Best Paper Awards at CHI, CSCW, and USENIX Security, and third place in the Internet Defense Prize. She has interned at Google and with the Social Media Collective at Microsoft Research. She holds a B.A. from Princeton University.

    Host: Jesse Thomason

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Angelina Wang (Princeton University) - Operationalizing Responsible Machine Learning: From Equality Towards Equity

    Tue, Mar 05, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Angelina Wang, Princeton University

    Talk Title: Operationalizing Responsible Machine Learning: From Equality Towards Equity

    Abstract: With the widespread proliferation of machine learning, there arises both the opportunity for societal benefit as well as the risk of harm. Approaching responsible machine learning is challenging because technical approaches may prioritize a mathematical definition of fairness that correlates poorly to real-world constructs of fairness due to too many layers of abstraction. Conversely, social approaches that engage with prescriptive theories may produce findings that are too abstract to effectively translate into practice. In my research, I bridge these approaches and utilize social implications to guide technical work. I will discuss three research directions that show how, despite the technically convenient approach of considering equality acontextually, a stronger engagement with societal context allows us to operationalize a more equitable formulation. First, I will introduce a dataset tool that we developed to analyze complex, socially-grounded forms of visual bias. Then, I will provide empirical evidence to support how we should incorporate societal context in bringing intersectionality into machine learning. Finally, I will discuss how in the excitement of using LLMs for tasks like human participant replacement, we have neglected to consider the importance of human positionality. Overall, I will explore how we can expand a narrow focus on equality in responsible machine learning to encompass a broader understanding of equity that substantively engages with societal context.  
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Angelina Wang is a Computer Science PhD student at Princeton University advised by Olga Russakovsky. Her research is in the area of machine learning fairness and algorithmic bias. She has been recognized by the NSF GRFP, EECS Rising Stars, Siebel Scholarship, and Microsoft AI & Society Fellowship. She has published in top machine learning (ICML, AAAI), computer vision (ICCV, IJCV), interdisciplinary (Big Data & Society), and responsible computing (FAccT, JRC) venues, including spotlight and oral presentations. Previously, she has interned with Microsoft Research and Arthur AI, and received a B.S. in Electrical Engineering and Computer Science from UC Berkeley.

    Host: Bistra Dilkina

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Chang Xiao (Adobe Research) - Augmented Interaction Between Physical and Digital Realm

    Wed, Mar 06, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Chang Xiao, Adobe Research

    Talk Title: Augmented Interaction Between Physical and Digital Realm

    Series: Computer Science Colloquium

    Abstract: Today's computing devices, including mobile phones, wearable devices, and VR/AR headsets, have become increasingly powerful and accessible to almost everyone. They offer a direct and immersive interaction with digital worlds. But what if we could use these devices to access interactive physical worlds as well, expanding our interaction space and unlocking greater interactive potential? In this talk, I will discuss our work on integrating both physical and digital systems to create a new computing environment. Leveraging techniques from AI/ML, Computer Vision, and Computational Design, we propose several interactive systems and sensing techniques that provide users with unified, low-cost, tangible, and intuitive experiences. These approaches unlock the potential of using the physical environment as computer interfaces in the era of Extended Reality (XR) and spatial computing, bridging the gap between physical and digital spaces.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Chang Xiao is currently a Research Scientist at Adobe Research. He obtained his PhD from Columbia University in 2021. His broad interests lie at the intersection of HCI, AI/ML, and AR/VR, with a special focus on leveraging AI/ML to develop novel interaction and sensing techniques. His work has been published in a wide spectrum of top computer science venues, including CHI, UIST, SIGGRAPH, NeurIPS, CVPR, and ICLR. His research has gained impact beyond academia, having been successfully integrated into multiple Adobe products and receiving widespread attention, including media interviews and coverage by CNN, Adweek, CACM, and IEEE Spectrum. During his PhD studies, he received the Snap Research Fellowship and the Cheung-Kong Innovation Doctoral Fellowship.

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Ben Lengerich (MIT) - Contextualized learning for adaptive yet persistent AI in biomedicine

    Thu, Mar 07, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Ben Lengerich, MIT

    Talk Title: Contextualized learning for adaptive yet persistent AI in biomedicine

    Series: Computer Science Colloquium

    Abstract: Machine learning models often exhibit diminished generalizability when applied across diverse biomedical contexts (e.g., across health institutions), leading to a significant discrepancy between expected and actual performance. To address this challenge, this presentation introduces "contextualized learning", a meta-learning paradigm designed to enhance model adaptability by learning meta-relationships between dataset context and statistical parameters. Using network inference as an illustrative example, I will show how contextualized learning estimates context-specific graphical models, offering insights such as personalized gene expression analysis for cancer subtyping. The talk will also discuss trends towards “contextualized understanding”, bridging statistical and foundation models to standardize interpretability. The primary aim is to illustrate how contextualized learning and understanding contribute to creating learning systems that are both adaptive and persistent, facilitating cross-context information sharing and detailed analysis.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Ben Lengerich is a Postdoctoral Associate and Alana Fellow at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Broad Institute of MIT and Harvard, where he is advised by Manolis Kellis. His research in machine learning and computational biology emphasizes the use of context-adaptive models to understand complex diseases and advance precision medicine. Through his work, Ben aims to bridge the gap between data-driven insights and actionable medical interventions. He holds a PhD in Computer Science and MS in Machine Learning from Carnegie Mellon University, where he was advised by Eric Xing. His work has been recognized with spotlight presentations at conferences including NeurIPS, ISMB, AMIA, and SMFM, financial support from the Alana Foundation, and recognition as a "Rising Star in Data Science” by the University of Chicago and UC San Diego.

    Host: Willie Neiswanger

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: TBA

    Mon, Mar 18, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: TBA, TBA

    Talk Title: TBA

    Series: Computer Science Colloquium

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Machine Learning Center Seminar: Lily Weng (UC San Diego) - Towards Interpretable Deep Learning

    Mon, Mar 18, 2024 @ 12:00 PM - 01:30 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Lily Weng, UC San Diego

    Talk Title: Towards Interpretable Deep Learning

    Series: Machine Learning Center Seminar Series

    Abstract: Deep neural networks (DNNs) have achieved unprecedented success across many scientific and engineering fields in recent decades. Despite their empirical success, however, they are notoriously black-box models whose decision processes are difficult to understand. This lack of interpretability is a critical issue that may seriously hinder the deployment of DNNs in high-stakes applications, which require interpretability to trust the prediction, to understand potential failures, and to be able to mitigate harms and eliminate biases in the model.
     
    In this talk, I'll share some exciting results in my lab on advancing explainable AI and interpretable machine learning. Specifically, I will show how we could bring interpretability into deep learning by leveraging recent advances in multi-modal models. I'll present two recent works [1,2] in our group on demystifying neural networks and interpretability-guided neural network design, which are the important first steps to enable Trustworthy AI and Trustworthy Machine Learning. I will also briefly overview our other recent efforts on Trustworthy Machine Learning and automated explanations for LLMs [3].     
     
    [1] Oikarinen and Weng, CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks, ICLR 23 (spotlight)
    [2] Oikarinen, Das, Nguyen and Weng, Label-Free Concept Bottleneck Models, ICLR 23
    [3] Lee, Oikarinen et al., The Importance of Prompt Tuning for Automated Neuron Explanations, NeurIPS 23 ATTRIB workshop

    Biography: Lily Weng is an Assistant Professor in the Halicioglu Data Science Institute at UC San Diego. She received her PhD in Electrical Engineering and Computer Sciences (EECS) from MIT in August 2020, and her Bachelor's and Master's degrees, both in Electrical Engineering, from National Taiwan University. Prior to UCSD, she spent one year at the MIT-IBM Watson AI Lab and completed several research internships at Google DeepMind, IBM Research, and Mitsubishi Electric Research Lab. Her research interest is in machine learning and deep learning, with a primary focus on trustworthy AI. Her vision is to make the next generation of AI systems and deep learning algorithms more robust, reliable, explainable, trustworthy, and safe. For more details, please see https://lilywenglab.github.io/.

    Host: Yan Liu

    Location: Ronald Tutor Hall of Engineering (RTH) - 306

    Audiences: Everyone Is Invited

    Contact: CS Events

  • CS Colloquium: Sherry Yang - Decision Making with Internet-Scale Knowledge

    Tue, Mar 19, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Sherry Yang, UC Berkeley

    Talk Title: Decision Making with Internet-Scale Knowledge

    Abstract: Machine learning models pretrained on internet data have acquired broad knowledge about the world but struggle to solve complex tasks that require extended reasoning and planning. Sequential decision making, on the other hand, has empowered AlphaGo’s superhuman performance, but lacks visual, language, and physical knowledge about the world. In this talk, I will present my research towards enabling decision making with internet-scale knowledge. First, I will illustrate how language models and video generation are unified interfaces that can integrate internet knowledge and represent diverse tasks, enabling the creation of a generative simulator to support real-world decision-making. Second, I will discuss my work on designing decision making algorithms that can take advantage of generative language and video models as agents and environments. Combining pretrained models with decision making algorithms can effectively enable a wide range of applications such as developing chatbots, learning robot policies, and discovering novel materials.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Sherry is a final year PhD student at UC Berkeley advised by Pieter Abbeel and a senior research scientist at Google DeepMind. Her research aims to develop machine learning models with internet-scale knowledge to make better-than-human decisions. To this end, she has developed techniques for generative modeling and representation learning from large-scale vision, language, and structured data, coupled with developing algorithms for sequential decision making such as imitation learning, planning, and reinforcement learning. Sherry initiated and led the Foundation Models for Decision Making workshop at NeurIPS 2022 and 2023, bringing together research communities in vision, language, planning, and reinforcement learning to solve complex decision making tasks at scale.  Before her current role, Sherry received her Bachelor’s degree and Master’s degree from MIT advised by Patrick Winston and Julian Shun.

    Host: Dani Yogatama

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Mengyuan Li - Confidential Computing and Trusted Execution Environment: Challenges, Opportunities, and the Future

    Wed, Mar 20, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Mengyuan Li, MIT

    Talk Title: Confidential Computing and Trusted Execution Environment: Challenges, Opportunities, and the Future

    Abstract: Confidential Computing, or Trusted Execution Environment (TEE), represents a cutting-edge design in server-grade CPUs. This technology acts as a protective shield for cloud tasks, safeguarding the confidentiality and integrity of cloud workloads against a range of threats, including attacks from privileged software, physical attackers, and untrustworthy hypervisors. As the demand for secure private data handling continues to rise, the adoption of Confidential Computing has become widespread across various industries. Evidence of this includes the adoption of TEE in server-grade CPUs from major vendors like Intel, AMD, and ARM. Furthermore, leading cloud service providers, such as AWS, Google Cloud, Microsoft Azure, and IBM Cloud, now offer commercial Confidential Computing services.

    In this talk, I will outline my contributions to the study of complex, heterogeneous Confidential Computing systems. I will share my insights into two real-world vulnerabilities we uncovered within commercial Confidential Computing systems, along with our joint efforts with CPU manufacturers to address these issues in the latest server-grade CPUs. At the hardware design level, I will discuss a novel ciphertext side-channel attack targeting hardware-accelerated memory encryption, which is a crucial hardware feature to protect the memory of cloud workloads. Moving to the software system design level, I will illustrate how inadequately designed TEE operating systems can pose a threat to the security of Confidential VMs. Finally, I will outline my ongoing efforts and future directions in enhancing the security and effectiveness of Confidential Computing and my research vision towards building secure and performant hardware systems.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Mengyuan Li is a postdoctoral researcher at CSAIL, MIT, under the guidance of Prof. Mengjia Yan. His research focuses on bringing security and trust to hardware systems, with a recent concentration in Confidential Cloud Computing and Trusted Execution Environments. To this end, he has identified real-world hardware vulnerabilities in commodity CPUs, which have been acknowledged by manufacturers through hardware CVEs and several security bulletins. Additionally, he has collaborated closely with industry teams such as AMD, Intel, WolfSSL, and Alibaba Cloud to develop mitigations and design commercial trustworthy hardware systems. His research findings have been published in top security and privacy venues, including S&P, Usenix Security, and CCS, and have been recognized by the CCS 2021 Best Paper Runner-up Award. Before MIT, Mengyuan earned his Ph.D. in Computer Science and Engineering from The Ohio State University (OSU) in 2022.

    Host: Seo Jin Park

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Andrew Ilyas - Making machine learning predictably reliable

    Thu, Mar 21, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Andrew Ilyas, MIT

    Talk Title: Making machine learning predictably reliable

    Abstract: Despite ML models' impressive performance, training and deploying them is currently a somewhat messy endeavor. But does it have to be? In this talk, I overview my work on making ML “predictably reliable”: enabling developers to know when their models will work, when they will fail, and why. To begin, we use a case study of adversarial inputs to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a line of work that aims to develop a precise understanding of the ML pipeline, combining statistical tools with large-scale experiments to characterize the role of each individual design choice: from how to collect data, to what dataset to train on, to what learning algorithm to use.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Andrew Ilyas is a PhD student in Computer Science at MIT, where he is advised by Aleksander Madry and Constantinos Daskalakis. His research aims to improve the reliability and predictability of machine learning systems. He was previously supported by an Open Philanthropy AI Fellowship.

    Host: Vatsal Sharan

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • USC Symposium on Frontiers of Generative AI Models in Science and Society

    Mon, Mar 25, 2024 @ 08:30 AM - 06:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Various, USC Machine Learning Center

    Talk Title: USC Symposium on Frontiers of Generative AI Models in Science and Society

    Abstract: The USC Machine Learning Center and the Computer Science Department are excited to host the symposium on "Frontiers of Generative AI Models in Science and Society". Experts in generative AI models will discuss recent progress and its applications in science and society.
     
    Keynote Speakers: Alessandro Vespignani (Northeastern University), Nitesh Chawla (Notre Dame), Yizhou Sun (UCLA), & Jian Ma (CMU)    
     
    Spotlight Speakers: Jieyu Zhao, Robin Jia, Yue Wang, Vatsal Sharan, & Ruishan Liu (USC Thomas Lord Department of Computer Science)

    Host: USC Machine Learning Center

    More Info: https://www.eventbrite.com/e/usc-symposium-on-frontiers-of-generative-ai-models-in-science-and-society-tickets-860269668737?aff=oddtdtcreator

    Location: Michelson Center for Convergent Bioscience (MCB) - 101

    Audiences: Everyone Is Invited

    Contact: Thomas Lord Department of Computer Science

    Event Link: https://www.eventbrite.com/e/usc-symposium-on-frontiers-of-generative-ai-models-in-science-and-society-tickets-860269668737?aff=oddtdtcreator

  • CS Colloquium: Junzhe Zhang - Towards Causal Reinforcement Learning

    Mon, Mar 25, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Junzhe Zhang, Columbia University

    Talk Title: Towards Causal Reinforcement Learning

    Abstract: Causal inference provides a set of principles and tools that allows one to combine data and knowledge about an environment to reason with questions of a counterfactual nature - i.e., what would have happened if the reality had been different - even when no data of this unrealized reality is currently available. Reinforcement learning provides a collection of methods that allows the agent to reason about optimal decision-making under uncertainty by trial and error - i.e., what would the consequences (e.g., subsequent rewards, states) be had the action been different? While these two disciplines have evolved independently and with virtually no interaction, they operate over various aspects of the same building block, i.e., counterfactual reasoning, making them umbilically connected.

    This talk will present a unified theoretical framework, called causal reinforcement learning, that explores the nuanced interplays between causal inference and reinforcement learning. I will discuss a recent breakthrough in partial identification that allows one to infer unknown causal effects from a combination of model assumptions and available data. Delving deeper, I will then demonstrate how this method could be applicable to address some practical challenges in classic reinforcement learning tasks, including robust off-policy evaluation from confounded observations and accelerating online learning with offline data.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Junzhe Zhang is a postdoctoral research scientist in the Causal AI lab at Columbia University. He obtained his doctoral degree in Computer Science at Columbia University, advised by Elias Bareinboim. His research centers on causal inference theory and its applications in reinforcement learning, algorithmic fairness, and explainability. His works have been selected for oral presentations in top refereed venues such as NeurIPS.

    Host: Sven Koenig

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Xiang Anthony Chen - Catalyzing AI Advances with Human-Centered Interactive Systems

    Tue, Mar 26, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Xiang Anthony Chen, UCLA

    Talk Title: Catalyzing AI Advances with Human-Centered Interactive Systems

    Abstract: Despite the unprecedented advances in AI, there has always been a gap between how well an AI model performs and how such performance can serve humanity. In this seminar, I will describe my past work to close this gap. Specifically, I develop human-centered interactive systems that catalyze advances in AI to achieve three levels of objectives: aligning with human values, assimilating human intents, and augmenting human abilities. Further, I will discuss my ongoing and future research, focused on AI for scientific discovery, AI with Theory of Mind, and AI-mediated human communication.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Xiang ‘Anthony' Chen is an Assistant Professor in UCLA's Department of Electrical & Computer Engineering. He received a Ph.D. in the School of Computer Science at Carnegie Mellon University. Anthony's area of expertise is Human-Computer Interaction (HCI). His research employs human-centered design methods to build systems that catalyze advances in AI to better serve humanity, supported by NSF CAREER Award, ONR YIP Award, Google Research Scholar Award, Intel Rising Star Award, Hellman Fellowship, NSF CRII Award, and Adobe Ph.D. Fellowship. Anthony’s work has resulted in 55+ publications with three best paper awards and three honorable mentions in top-tier HCI conferences.

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CAIS Webinar: Dr. Jessica Ridgway (University of Chicago) - Predictive Analytics for Engagement in HIV Care

    Tue, Mar 26, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dr. Jessica Ridgway, University of Chicago

    Talk Title: Predictive Analytics for Engagement in HIV Care

    Abstract: Engagement in care is essential for the health of people with HIV, but only half of people with HIV in the U.S. receive regular medical care. Dr. Ridgway will discuss her research utilizing machine learning models based on electronic medical record data to predict engagement in care among people with HIV. She has developed machine learning models using structured data as well as natural language processing of unstructured clinical notes. She will discuss challenges and pitfalls in utilizing electronic medical record data for HIV-related predictive modeling, as well as implications for implementation in clinical practice.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Jessica Ridgway, MD, MS, is an Associate Professor of Medicine in the Section of Infectious Diseases and Global Health and Director of Medical Informatics at the University of Chicago. She is Director of Predictive Analytics for the Chicago Center for HIV Elimination. Her research focuses on utilizing large electronic medical record databases to understand HIV epidemiology across the continuum of care and implementation of clinical informatics interventions to improve HIV care and prevention.

    Host: USC Center for Artificial Intelligence in Society (CAIS)

    More Info: https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

    Location: Zoom only - https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

  • CS Colloquium: Paul Liang - Foundations of Multisensory Artificial Intelligence

    Wed, Mar 27, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Paul Liang, CMU

    Talk Title: Foundations of Multisensory Artificial Intelligence

    Abstract: Building multisensory AI systems that learn from multiple sensory inputs such as text, speech, video, real-world sensors, wearable devices, and medical data holds great promise for impact in many scientific areas with practical benefits, such as in supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. In this talk, I will discuss my research on the machine learning principles of multisensory intelligence, as well as practical methods for building multisensory foundation models over many modalities and tasks. In the first half, I will present a theoretical framework formalizing how modalities interact with each other to give rise to new information for a task. These interactions are the basic building blocks in all multimodal problems, and their quantification enables users to understand their multimodal datasets and design principled approaches to learn these interactions. In the second part, I will present my work in cross-modal attention and multimodal transformer architectures that now underpin many of today’s multimodal foundation models. Finally, I will discuss our collaborative efforts in scaling AI to many modalities and tasks for real-world impact on mental health, cancer prognosis, and robot control.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Paul Liang is a Ph.D. student in Machine Learning at CMU, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. He studies the machine learning foundations of multisensory intelligence to design practical AI systems that integrate, learn from, and interact with a diverse range of real-world sensory modalities. His work has been applied in affective computing, mental health, pathology, and robotics. He is a recipient of the Siebel Scholars Award, Waibel Presidential Fellowship, Facebook PhD Fellowship, Center for ML and Health Fellowship, Rising Stars in Data Science, and 3 best paper/honorable mention awards at ICMI and NeurIPS workshops. Outside of research, he received the Alan J. Perlis Graduate Student Teaching Award for instructing courses on multimodal ML and advising students around the world in directed research.

    Host: Willie Neiswanger / Xiang Ren

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Teodora Baluta - New Algorithmic Tools for Rigorous Machine Learning Security Analysis

    Wed, Mar 27, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Teodora Baluta, National University of Singapore

    Talk Title: New Algorithmic Tools for Rigorous Machine Learning Security Analysis

    Abstract: Machine learning security is an emerging area with many open questions lacking systematic analysis. In this talk, I will present three new algorithmic tools to address this gap: (1) algebraic proofs; (2) causal reasoning; and (3) sound statistical verification. Algebraic proofs provide the first conceptual mechanism to resolve intellectual property disputes over training data. I show that stochastic gradient descent, the de facto training procedure for modern neural networks, is a collision-resistant computation under precise definitions. These results open up connections to lattices, mathematical tools presently used in cryptography. I will also briefly mention my efforts to analyze the causes of empirical privacy attacks and defenses using causal models, and to devise statistical verification procedures with 'probably approximately correct' (PAC)-style soundness guarantees.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Teodora Baluta is a Ph.D. candidate in Computer Science at the National University of Singapore. She enjoys working on security problems that are both algorithmic in nature and practically relevant. She is one of the EECS Rising Stars 2023, a Google PhD Fellow, a Dean’s Graduate Research Excellence Award recipient and a President’s Graduate Fellowship recipient at NUS. She interned at Google Brain working in the Learning for Code team. Her works are published in security (CCS, NDSS), programming languages/verification conferences (OOPSLA, SAT), and software engineering conferences (ICSE, ESEC/FSE). More details are available on her webpage: https://teobaluta.github.io/

    Host: Mukund Raghothaman

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Yangsibo Huang - Auditing Policy Compliance in Machine Learning Systems

    Thu, Mar 28, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Yangsibo Huang, Princeton University

    Talk Title: Auditing Policy Compliance in Machine Learning Systems

    Abstract: As the capabilities of large-scale machine learning models expand, so too do their associated risks. There is an increasing demand for policies that mandate these models to be safe, privacy-preserving, and transparent regarding data usage. However, there are significant challenges in developing enforceable policies and translating qualitative mandates into quantitative, auditable, and actionable criteria. In this talk, I will present my work on addressing these challenges. I will first share my exploration of privacy leakage and mitigation strategies in distributed training. Then, I will explore strategies for auditing compliance with data transparency regulations. I will also examine methods to quantify and assess the fragility of safety alignments in large language models. Finally, I will discuss my plans for future research directions, including collaboration with policy researchers and policymakers.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Yangsibo Huang is a Ph.D. candidate and Wallace Memorial Fellow at Princeton University. Her research sits at the intersection of machine learning, systems, and policy, with a focus on auditing and improving machine learning systems’ compliance with policies from the perspectives of privacy, safety, and data usage. She interned at Google AI, Meta AI, and Harvard Medical School, and was named an EECS Rising Star in 2023.

    Host: Yue Zhao

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Ram Sundara Raman - Global Investigation of Network Connection Tampering

    Thu, Mar 28, 2024 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Ram Sundara Raman, University of Michigan

    Talk Title: Global Investigation of Network Connection Tampering

    Abstract: As the Internet's user base and the criticality of online services continue to expand, powerful adversaries like Internet censors increasingly monitor and restrict Internet traffic. These adversaries, powered by advanced network technology, perform large-scale connection tampering attacks that seek to prevent users from accessing specific online content, compromising Internet availability and integrity. In recent years, we have witnessed recurring censorship events affecting Internet users globally, with far-reaching social, financial, and psychological consequences, making them important to study. However, characterizing tampering attacks at global scale is an extremely challenging problem, given intentionally opaque practices by adversaries, varying tampering mechanisms and policies across networks, evolving environments, sparse ground truth, and safety risks in collecting data. In this talk, I will describe my research on building empirical methods to characterize connection tampering globally and investigate the network technology enabling tampering. First, I will describe a modular design for the Censored Planet Observatory that enables it to remotely and sustainably measure Internet censorship longitudinally in more than 200 countries. I will introduce time-series analysis methods to detect key censorship events in longitudinal Censored Planet data and reveal global censorship trends. I will also briefly describe methods to detect connection tampering using purely passive data. Next, I will introduce novel network measurement methods for locating and examining network devices that perform censorship. Finally, I will describe exciting ongoing and future research directions, such as building intelligent measurement platforms.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Ram Sundara Raman is a PhD candidate in Computer Science and Engineering at the University of Michigan, advised by Prof. Roya Ensafi. His research lies in the intersection of computer security, privacy, and networking, employing empirical methods to study large-scale Internet attacks. Ram has been recognized as a Rising Star at the Workshop on Free and Open Communications on the Internet (FOCI), and was awarded the IRTF Applied Networking Research Prize in 2023. His work has helped produce one of the biggest active censorship measurement platforms, the Censored Planet Observatory, and has helped prevent large-scale attacks on end-to-end encryption.

    Host: Jyo Deshmukh

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs
