
Events Calendar




Events for March

  • CS Colloquium: Emily Tseng (Cornell University) - Digital Safety and Security for Survivors of Technology-Mediated Harms

    Mon, Mar 04, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Emily Tseng, Cornell University

    Talk Title: Digital Safety and Security for Survivors of Technology-Mediated Harms

    Series: Computer Science Colloquium

    Abstract: Platforms, devices, and algorithms are increasingly weaponized to control and harass the most vulnerable among us. Some of these harms occur at the individual and interpersonal level: for example, abusers in intimate partner violence (IPV) use smartphones and social media to surveil and stalk their victims. Others are more subtle, at the level of social structure: for example, in organizations, workplace technologies can inadvertently scaffold exploitative labor practices. This talk will discuss my research (1) investigating these harms via online measurement studies, (2) building interventions to directly assist survivors with their security and privacy; and (3) instrumenting these interventions as observatories, to enable scientific research into new types of harms as attackers and technologies evolve. I will close by sharing my vision for centering inclusion and equity in digital safety, security and privacy, towards brighter technological futures for us all.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Emily Tseng is a PhD candidate in Information Science at Cornell University. Her research develops the systems, interventions, and design principles we need to make digital technology safe and affirming for everyone. Emily’s work has been published at top-tier venues in human-computer interaction (ACM CHI, CSCW) and computer security and privacy (USENIX Security, IEEE Oakland). For 5 years, she has worked as a researcher-practitioner with the Clinic to End Tech Abuse, where her work has enabled specialized security services for over 500 survivors of intimate partner violence (IPV). Emily is the recipient of a Microsoft Research PhD Fellowship, Rising Stars in EECS, Best Paper Awards at CHI, CSCW, and USENIX Security, and third place in the Internet Defense Prize. She has interned at Google and with the Social Media Collective at Microsoft Research. She holds a B.A. from Princeton University.

    Host: Jesse Thomason

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Angelina Wang (Princeton University) - Operationalizing Responsible Machine Learning: From Equality Towards Equity

    Tue, Mar 05, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Angelina Wang, Princeton University

    Talk Title: Operationalizing Responsible Machine Learning: From Equality Towards Equity

    Abstract: With the widespread proliferation of machine learning, there arises both the opportunity for societal benefit as well as the risk of harm. Approaching responsible machine learning is challenging because technical approaches may prioritize a mathematical definition of fairness that correlates poorly to real-world constructs of fairness due to too many layers of abstraction. Conversely, social approaches that engage with prescriptive theories may produce findings that are too abstract to effectively translate into practice. In my research, I bridge these approaches and utilize social implications to guide technical work. I will discuss three research directions that show how, despite the technically convenient approach of considering equality acontextually, a stronger engagement with societal context allows us to operationalize a more equitable formulation. First, I will introduce a dataset tool that we developed to analyze complex, socially-grounded forms of visual bias. Then, I will provide empirical evidence to support how we should incorporate societal context in bringing intersectionality into machine learning. Finally, I will discuss how in the excitement of using LLMs for tasks like human participant replacement, we have neglected to consider the importance of human positionality. Overall, I will explore how we can expand a narrow focus on equality in responsible machine learning to encompass a broader understanding of equity that substantively engages with societal context.  
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Angelina Wang is a Computer Science PhD student at Princeton University advised by Olga Russakovsky. Her research is in the area of machine learning fairness and algorithmic bias. She has been recognized by the NSF GRFP, EECS Rising Stars, Siebel Scholarship, and Microsoft AI & Society Fellowship. She has published in top machine learning (ICML, AAAI), computer vision (ICCV, IJCV), interdisciplinary (Big Data & Society), and responsible computing (FAccT, JRC) venues, including spotlight and oral presentations. Previously, she has interned with Microsoft Research and Arthur AI, and received a B.S. in Electrical Engineering and Computer Science from UC Berkeley.

    Host: Bistra Dilkina

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Thesis Proposal - Shao-Hung Chan

    Tue, Mar 05, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Thesis Proposal - Shao-Hung Chan
     
    Committee members: Sven Koenig (chair), T.K. Satish Kumar, Lars Lindemann, John Carlsson, and Daniel Harabor
     
    Title: Flex Distribution for Bounded-Suboptimal Multi-Agent Path Finding
     
    Time: Mar. 5th, 2:00 PM - 3:00 PM 
    Location: EEB 349
     
     
    Abstract:
    Multi-Agent Path Finding (MAPF) is the problem of finding collision-free paths for multiple agents that minimize the sum of path costs. Explicit Estimation Conflict-Based Search (EECBS) is a leading two-level algorithm that solves MAPF bounded-suboptimally, i.e., within some factor w away from the minimum sum of path costs C*. It uses Focal Search to find bounded-suboptimal paths on the low level and Explicit Estimation Search (EES) to resolve collisions on the high level. To solve MAPF bounded-suboptimally, EES keeps track of a lower bound LB on C* to find paths whose sum of path costs is at most w times LB. However, the costs of many paths are often much smaller than w times their minimum path costs, meaning that the sum of path costs is much smaller than w times C*. Thus, in this proposal, we hypothesize that one can improve the efficiency of EECBS via Flex Distribution. That is, one can use the flex of the path costs (that relaxes the requirement to find bounded-suboptimal paths on the low level) to reduce the number of collisions that need to be resolved on the high level while still guaranteeing to solve MAPF bounded suboptimally. We also discuss the limitations of Flex Distribution and propose some techniques to overcome them.
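    The bounded-suboptimality condition underlying the proposal can be written compactly (using the abstract's notation w, LB, and C*; reading the "flex" as the unused slack is our gloss on the abstract, not a definition from the talk):

    ```latex
    % A MAPF solution with paths p_i is bounded-suboptimal iff its cost is
    % within factor w of the (unknown) optimum C*. EECBS maintains a lower
    % bound LB <= C*, so any solution satisfying the left inequality below
    % automatically satisfies the guarantee on the right:
    \[
    \sum_{i} c(p_i) \;\le\; w \cdot \mathit{LB} \;\le\; w \cdot C^{*}
    \]
    % The "flex" is the slack  w \cdot LB - \sum_i c(p_i),  which Flex
    % Distribution reallocates across the agents' low-level searches while
    % the global bound above continues to hold.
    ```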
     

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 349

    Audiences: Everyone Is Invited

    Contact: CS Events

  • CS Colloquium: Chang Xiao (Adobe Research) - Augmented Interaction Between Physical and Digital Realm

    Wed, Mar 06, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Chang Xiao, Adobe Research

    Talk Title: Augmented Interaction Between Physical and Digital Realm

    Series: Computer Science Colloquium

    Abstract: Today's computing devices, including mobile phones, wearable devices, and VR/AR headsets, have become increasingly powerful and accessible to almost everyone. They offer a direct and immersive interaction with digital worlds. But what if we could use these devices to access interactive physical worlds as well, expanding our interaction space and unlocking greater interactive potential? In this talk, I will discuss our work on integrating both physical and digital systems to create a new computing environment. Leveraging techniques from AI/ML, Computer Vision, and Computational Design, we propose several interactive systems and sensing techniques that provide users with unified, low-cost, tangible, and intuitive experiences. These approaches unlock the potential of using the physical environment as computer interfaces in the era of Extended Reality (XR) and spatial computing, bridging the gap between physical and digital spaces.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Chang Xiao is currently a Research Scientist at Adobe Research. He obtained his PhD from Columbia University in 2021. His broad interests lie at the intersection of HCI, AI/ML, and AR/VR, with a special focus on leveraging AI/ML to develop novel interaction and sensing techniques. His work has been published in a wide spectrum of top computer science venues, including CHI, UIST, SIGGRAPH, NeurIPS, CVPR, and ICLR. His research has gained impact beyond academia, having been successfully integrated into multiple Adobe products and receiving widespread attention, including media interviews and coverage by CNN, Adweek, CACM, and IEEE Spectrum. During his PhD studies, he received the Snap Research Fellowship and the Cheung-Kong Innovation Doctoral Fellowship.

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Dissertation Defense - Sina Shaham

    Wed, Mar 06, 2024 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Dissertation Defense - Sina Shaham  
     
    Committee: Prof. Bhaskar Krishnamachari, Prof. Cyrus Shahabi, Prof. Cauligi Raghavendra  
     
    Title: Responsible AI in Spatio-Temporal Data Processing
     
    Abstract:    In this presentation, we systematically investigate the design and development of algorithms to improve privacy and fairness in the processing of spatio-temporal data. Beginning with an essential background introduction and a review of cutting-edge advancements, the discussion progresses to introduce a novel algorithm for safeguarding privacy in the dissemination of Origin-Destination (OD) Matrices. This algorithm, rooted in Differential Privacy (DP) principles, aims to protect user privacy during the collection and sharing of OD-matrices in 2D and higher dimensions. Subsequently, our focus shifts to the domain of user energy consumption, where we develop a methodology that ensures user privacy when disclosing electricity time series to third parties and entities that may not be fully trusted. Following this, we propose an incentive-based program aimed at balancing electricity demand, taking into account socio-economic family attributes and ensuring fair treatment. Through comprehensive evaluations, the presentation demonstrates the progress made over previous works and also sheds light on potential areas for future studies, particularly in the realm of responsible handling of complex spatio-temporal data.
     
    Zoom Link: https://usc.zoom.us/j/98092705100    

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 539

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/j/98092705100

  • CS Colloquium: Ben Lengerich (MIT) - Contextualized learning for adaptive yet persistent AI in biomedicine

    Thu, Mar 07, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Ben Lengerich, MIT

    Talk Title: Contextualized learning for adaptive yet persistent AI in biomedicine

    Series: Computer Science Colloquium

    Abstract: Machine learning models often exhibit diminished generalizability when applied across diverse biomedical contexts (e.g., across health institutions), leading to a significant discrepancy between expected and actual performance. To address this challenge, this presentation introduces "contextualized learning", a meta-learning paradigm designed to enhance model adaptability by learning meta-relationships between dataset context and statistical parameters. Using network inference as an illustrative example, I will show how contextualized learning estimates context-specific graphical models, offering insights such as personalized gene expression analysis for cancer subtyping. The talk will also discuss trends towards “contextualized understanding”, bridging statistical and foundation models to standardize interpretability. The primary aim is to illustrate how contextualized learning and understanding contribute to creating learning systems that are both adaptive and persistent, facilitating cross-context information sharing and detailed analysis.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Ben Lengerich is a Postdoctoral Associate and Alana Fellow at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Broad Institute of MIT and Harvard, where he is advised by Manolis Kellis. His research in machine learning and computational biology emphasizes the use of context-adaptive models to understand complex diseases and advance precision medicine. Through his work, Ben aims to bridge the gap between data-driven insights and actionable medical interventions. He holds a PhD in Computer Science and MS in Machine Learning from Carnegie Mellon University, where he was advised by Eric Xing. His work has been recognized with spotlight presentations at conferences including NeurIPS, ISMB, AMIA, and SMFM, financial support from the Alana Foundation, and recognition as a “Rising Star in Data Science” by the University of Chicago and UC San Diego.

    Host: Willie Neiswanger

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: TBA

    Mon, Mar 18, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: TBA

    Talk Title: TBA

    Series: Computer Science Colloquium

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Machine Learning Center Seminar: Lily Weng (UC San Diego) - Towards Interpretable Deep Learning

    Mon, Mar 18, 2024 @ 12:00 PM - 01:30 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Lily Weng, UC San Diego

    Talk Title: Towards Interpretable Deep Learning

    Series: Machine Learning Center Seminar Series

    Abstract: Deep neural networks (DNNs) have achieved unprecedented success across many scientific and engineering fields in recent decades. Despite their empirical success, however, they are notoriously black-box models whose decision processes are difficult to understand. This lack of interpretability is a critical issue that may seriously hinder the deployment of DNNs in high-stakes applications, which need interpretability to trust the prediction, to understand potential failures, and to be able to mitigate harms and eliminate biases in the model.
     
    In this talk, I'll share some exciting results in my lab on advancing explainable AI and interpretable machine learning. Specifically, I will show how we could bring interpretability into deep learning by leveraging recent advances in multi-modal models. I'll present two recent works [1,2] in our group on demystifying neural networks and interpretability-guided neural network design, which are the important first steps to enable Trustworthy AI and Trustworthy Machine Learning. I will also briefly overview our other recent efforts on Trustworthy Machine Learning and automated explanations for LLMs [3].     
     
    [1] Oikarinen and Weng, CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks, ICLR 23 (spotlight)
    [2] Oikarinen, Das, Nguyen and Weng, Label-Free Concept Bottleneck Models, ICLR 23
    [3] Lee, Oikarinen et al., The Importance of Prompt Tuning for Automated Neuron Explanations, NeurIPS 23 ATTRIB workshop

    Biography: Lily Weng is an Assistant Professor in the Halicioglu Data Science Institute at UC San Diego. She received her PhD in Electrical Engineering and Computer Sciences (EECS) from MIT in August 2020, and her Bachelor's and Master's degrees, both in Electrical Engineering, from National Taiwan University. Prior to UCSD, she spent a year at the MIT-IBM Watson AI Lab and completed several research internships at Google DeepMind, IBM Research, and Mitsubishi Electric Research Lab. Her research interest is in machine learning and deep learning, with a primary focus on trustworthy AI. Her vision is to make the next generation of AI systems and deep learning algorithms more robust, reliable, explainable, trustworthy, and safe. For more details, please see https://lilywenglab.github.io/.

    Host: Yan Liu

    Location: Ronald Tutor Hall of Engineering (RTH) - 306

    Audiences: Everyone Is Invited

    Contact: CS Events

  • CS Colloquium: Sherry Yang - Decision Making with Internet-Scale Knowledge

    Tue, Mar 19, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Sherry Yang, UC Berkeley

    Talk Title: Decision Making with Internet-Scale Knowledge

    Abstract: Machine learning models pretrained on internet data have acquired broad knowledge about the world but struggle to solve complex tasks that require extended reasoning and planning. Sequential decision making, on the other hand, has empowered AlphaGo’s superhuman performance, but lacks visual, language, and physical knowledge about the world. In this talk, I will present my research towards enabling decision making with internet-scale knowledge. First, I will illustrate how language models and video generation are unified interfaces that can integrate internet knowledge and represent diverse tasks, enabling the creation of a generative simulator to support real-world decision making. Second, I will discuss my work on designing decision-making algorithms that can take advantage of generative language and video models as agents and environments. Combining pretrained models with decision-making algorithms can effectively enable a wide range of applications such as developing chatbots, learning robot policies, and discovering novel materials.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Sherry is a final-year PhD student at UC Berkeley advised by Pieter Abbeel and a senior research scientist at Google DeepMind. Her research aims to develop machine learning models with internet-scale knowledge to make better-than-human decisions. To this end, she has developed techniques for generative modeling and representation learning from large-scale vision, language, and structured data, coupled with algorithms for sequential decision making such as imitation learning, planning, and reinforcement learning. Sherry initiated and led the Foundation Models for Decision Making workshop at NeurIPS 2022 and 2023, bringing together research communities in vision, language, planning, and reinforcement learning to solve complex decision-making tasks at scale. Before her current role, Sherry received her Bachelor's and Master's degrees from MIT, where she was advised by Patrick Winston and Julian Shun.

    Host: Dani Yogatama

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Thesis Proposal - Yuzhong Huang

    Tue, Mar 19, 2024 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee Members: Fred Morstatter (Chair), Yue Wang, Aiichiro Nakano, & Antonio Ortega
     
    Date & Time: Tuesday, March 19, 2024, 11:00 a.m. - 12:00 p.m. (PST) - PHE 325   
     
    Title: Explicit Control in the Understanding and Generation of 3D world
     
    Abstract: Understanding and recreating our living environment has been a key topic in scientific research, with applications ranging from virtual reality and autonomous driving to generative AI tools. Recent advancements have significantly improved machine models' capability to recognize and generate visually similar 3D objects. However, existing approaches often lack explicit control mechanisms, limiting their adaptability and interpretability. This thesis proposal addresses this gap by focusing on three crucial aspects: (1) explicit control in understanding 3D worlds, achieved through the imposition of planar priors and a plane-splatting volume rendering method; (2) explicit control in generating 3D worlds, enabled by an orientation-conditioned diffusion model; (3) explicit control in modifying 3D objects, enabled by projecting text-guided 2D segmentation maps onto 3D models. These advancements pave the way for more intuitive and precise manipulation of 3D environments.
     
    Zoom Link: https://usc.zoom.us/j/99330288526

    Location: Charles Lee Powell Hall (PHE) - 325

    WebCast Link: https://usc.zoom.us/j/99330288526

    Audiences: Everyone Is Invited

    Contact: Thomas Lord Department of Computer Science

    Event Link: https://usc.zoom.us/j/99330288526

  • CS Colloquium: Mengyuan Li - Confidential Computing and Trusted Execution Environment: Challenges, Opportunities, and the Future

    Wed, Mar 20, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Mengyuan Li, MIT

    Talk Title: Confidential Computing and Trusted Execution Environment: Challenges, Opportunities, and the Future

    Abstract: Confidential Computing, or Trusted Execution Environment (TEE), represents a cutting-edge design in server-grade CPUs. This technology acts as a protective shield for cloud tasks, safeguarding the confidentiality and integrity of cloud workloads against a range of threats, including attacks from privileged software, physical attackers, and untrustworthy hypervisors. As the demand for secure private data handling continues to rise, the adoption of Confidential Computing has become widespread across various industries. Evidence of this includes the adoption of TEE in server-grade CPUs from major vendors like Intel, AMD, and ARM. Furthermore, leading cloud service providers, such as AWS, Google Cloud, Microsoft Azure, and IBM Cloud, now offer commercial Confidential Computing services.
     
    In this talk, I will outline my contributions to the study of complex, heterogeneous Confidential Computing systems. I will share my insights into two real-world vulnerabilities we uncovered within commercial Confidential Computing systems, along with our joint efforts with CPU manufacturers to address these issues in the latest server-grade CPUs. At the hardware design level, I will discuss a novel ciphertext side-channel attack targeting hardware-accelerated memory encryption, a crucial hardware feature for protecting the memory of cloud workloads. Moving to the software system design level, I will illustrate how inadequately designed TEE operating systems can pose a threat to the security of Confidential VMs. Finally, I will outline my ongoing efforts and future directions in enhancing the security and effectiveness of Confidential Computing, and my research vision towards building secure and performant hardware systems.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Mengyuan Li is a postdoctoral researcher at CSAIL, MIT, under the guidance of Prof. Mengjia Yan. His research focuses on bringing security and trust to hardware systems, with a recent concentration in Confidential Cloud Computing and Trusted Execution Environments. To this end, he has identified real-world hardware vulnerabilities in commodity CPUs, which have been acknowledged by manufacturers through hardware CVEs and several security bulletins. Additionally, he has collaborated closely with industry teams such as AMD, Intel, WolfSSL, and Alibaba Cloud to develop mitigations and design commercial trustworthy hardware systems. His research findings have been published in top security and privacy venues, including S&P, Usenix Security, and CCS, and have been recognized by the CCS 2021 Best Paper Runner-up Award. Before MIT, Mengyuan earned his Ph.D. in Computer Science and Engineering from The Ohio State University (OSU) in 2022.

    Host: Seo Jin Park

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Computer Science General Faculty Meeting

    Wed, Mar 20, 2024 @ 12:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Bi-Weekly regular faculty meeting for invited full-time Computer Science faculty only. Event details emailed directly to attendees.

    Location: Hedco Neurosciences Building (HNB) - 107

    Audiences: Invited Faculty Only

    Contact: Assistant to CS Chair

  • CS Colloquium: Andrew Ilyas - Making machine learning predictably reliable

    Thu, Mar 21, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Andrew Ilyas, MIT

    Talk Title: Making machine learning predictably reliable

    Abstract: Despite ML models' impressive performance, training and deploying them is currently a somewhat messy endeavor. But does it have to be? In this talk, I overview my work on making ML “predictably reliable”: enabling developers to know when their models will work, when they will fail, and why. To begin, we use a case study of adversarial inputs to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a line of work that aims to develop a precise understanding of the ML pipeline, combining statistical tools with large-scale experiments to characterize the role of each individual design choice: from how to collect data, to what dataset to train on, to what learning algorithm to use.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Andrew Ilyas is a PhD student in Computer Science at MIT, where he is advised by Aleksander Madry and Constantinos Daskalakis. His research aims to improve the reliability and predictability of machine learning systems. He was previously supported by an Open Philanthropy AI Fellowship.

    Host: Vatsal Sharan

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Thesis Defense - Kushal Chawla

    Thu, Mar 21, 2024 @ 01:30 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Defense - Kushal Chawla  
     
    Title: Computational Foundations for Mixed-Motive Human-Machine Dialogue  
     
    Committee Members:   Gale Lucas (Chair), Jonathan Gratch, Jonathan May, Peter Kim, Maja Mataric  
     
    Abstract: Success in a mixed-motive interaction demands a balance between self-serving and other-serving behaviors. For instance, in a typical negotiation, a player must balance maximizing their own goals with the goals of their partner so as to come to an agreement. If the player asks for too much, this can push the partner to walk away without an agreement, thereby hurting the outcomes for all parties involved. Such interactions are ubiquitous in everyday life, from deciding who performs household chores to customer support and high-stakes business deals. Consequently, AI tools capable of comprehending and participating in such mixed-motive and other social influence interactions (such as argumentation or therapy) find broad applications in pedagogy and conversational AI.
     
    In this thesis, we present our foundational work for enabling mixed-motive human-machine dialogue. I will discuss our progress in three key areas: 1) the design of a novel task and dataset of grounded human-human negotiations that has fueled our investigations into the impact of emotion expression and linguistic strategies, 2) techniques for mixed-motive dialogue systems that learn to strike a balance between self and partner interests, and 3) promoting a research community for dedicated efforts and discussion in this area.
     
     
    Zoom Link: https://usc.zoom.us/j/96411089883?pwd=WDNuMjF1NDNTTXV5cDdGaWJzOG9Gdz09

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 110

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/j/96411089883?pwd=WDNuMjF1NDNTTXV5cDdGaWJzOG9Gdz09

  • PhD Dissertation Defense - Arvin Hekmati

    Thu, Mar 21, 2024 @ 02:30 PM - 04:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Dissertation Defense - Arvin Hekmati  
     
    Committee:  Prof. Bhaskar Krishnamachari (Chair), Prof. Cauligi Raghavendra, and  Prof. Aiichiro Nakano     
     
    Title: AI-Enabled DDoS Attack Detection in IoT Systems    
     
    Abstract:
    "In this thesis, we develop AI-enabled mechanisms for detecting Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) systems. We introduce a novel, tunable DDoS attack model that emulates benign IoT device behavior using a truncated Cauchy distribution. We investigate these futuristic DDoS attacks that use large numbers of IoT devices and camouflage their attack by having each node transmit at a volume typical of benign traffic. We propose innovative correlation-aware, learning-based frameworks that leverage IoT node correlation data for enhanced detection accuracy. We extensively analyze the proposed architectures by evaluating five different neural network models trained on a dataset derived from a 4060-node real-world IoT system. We observe that long short-term memory (LSTM) and a transformer-based model, in conjunction with the architectures that use correlation information of the IoT nodes, provide higher detection performance than the other models and architectures, especially when the attacker camouflages itself by following benign traffic distribution on each IoT node. We evaluated our findings through practical implementation on a Raspberry Pi-based testbed. In order to address the challenge of leveraging massive IoT device arrays for DDoS attacks, we introduce heuristic solutions for selective correlation information sharing among IoT devices. To overcome the challenge of fixed input limitations in conventional machine learning, we propose a model based on the Graph Convolutional Network (GCN) to manage incomplete data in IoT devices caused by network losses. We introduce various IoT device graph topologies, including Network, Peer-to-Peer, and Hybrid topologies with scenarios of both directed and undirected edges. 
Our simulations reveal that the Hybrid topology, employing correlation-based peer-to-peer undirected edges, achieves the highest detection performance, with at most a 2% drop in performance despite a 50% network connection loss, highlighting the proposed GCN-based model's effectiveness in detecting DDoS attacks under lossy network conditions. Finally, we explore the application of Large Language Models (LLMs) for detecting DDoS attacks and explaining the detection rationale, demonstrating the potential of fine-tuning and few-shot prompt engineering methods to achieve high accuracy and provide insightful detection reasoning."
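The abstract's tunable attack model emulates benign traffic volumes with a truncated Cauchy distribution. A minimal sketch of such a traffic generator, via inverse-CDF sampling, might look like this (function name and parameter values are illustrative, not taken from the dissertation):

```python
import numpy as np

def sample_truncated_cauchy(loc, scale, low, high, size, rng=None):
    """Draw samples from a Cauchy(loc, scale) restricted to [low, high]
    using inverse-CDF sampling."""
    rng = np.random.default_rng(rng)
    # Cauchy CDF, evaluated at the truncation bounds
    cdf = lambda x: 0.5 + np.arctan((x - loc) / scale) / np.pi
    u = rng.uniform(cdf(low), cdf(high), size)
    # Invert the CDF to map the uniforms back onto the truncated support
    return loc + scale * np.tan(np.pi * (u - 0.5))

# Emulate per-interval transmission volumes for a "camouflaged" attacking
# node that mimics benign traffic (all numbers here are made up).
volumes = sample_truncated_cauchy(loc=50.0, scale=10.0, low=0.0,
                                  high=200.0, size=1000, rng=0)
```

Because the uniforms are drawn between the CDF values of the bounds, every sample is guaranteed to fall inside [low, high], unlike naive rejection of raw Cauchy draws.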

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Ellecia Williams

    Event Link: https://usc.zoom.us/j/4677088430 

  • USC Symposium on Frontiers of Generative AI Models in Science and Society

    Mon, Mar 25, 2024 @ 08:30 AM - 06:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Various, USC Machine Learning Center

    Talk Title: USC Symposium on Frontiers of Generative AI Models in Science and Society

    Abstract: The USC Machine Learning Center and the Thomas Lord Department of Computer Science are excited to host the symposium "Frontiers of Generative AI Models in Science and Society". Experts in generative AI models will discuss recent progress and its applications in science and society.
     
    Keynote Speakers: Alessandro Vespignani (Northeastern University), Nitesh Chawla (Notre Dame), Yizhou Sun (UCLA), & Jian Ma (CMU)    
     
    Spotlight Speakers: Jieyu Zhao, Robin Jia, Yue Wang, Vatsal Sharan, & Ruishan Liu (USC Thomas Lord Department of Computer Science)

    Host: USC Machine Learning Center

    More Info: https://www.eventbrite.com/e/usc-symposium-on-frontiers-of-generative-ai-models-in-science-and-society-tickets-860269668737?aff=oddtdtcreator

    Location: Michelson Center for Convergent Bioscience (MCB) - 101

    Audiences: Everyone Is Invited

    Contact: Thomas Lord Department of Computer Science

    Event Link: https://www.eventbrite.com/e/usc-symposium-on-frontiers-of-generative-ai-models-in-science-and-society-tickets-860269668737?aff=oddtdtcreator

  • CS Colloquium: Junzhe Zhang - Towards Causal Reinforcement Learning

    Mon, Mar 25, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Junzhe Zhang, Columbia University

    Talk Title: Towards Causal Reinforcement Learning

    Abstract: Causal inference provides a set of principles and tools that allows one to combine data and knowledge about an environment to reason about questions of a counterfactual nature - i.e., what would have happened had reality been different - even when no data from this unrealized reality is available. Reinforcement learning provides a collection of methods that allows the agent to reason about optimal decision-making under uncertainty by trial and error - i.e., what would the consequences (e.g., subsequent rewards, states) be had the action been different? While these two disciplines have evolved independently and with virtually no interaction, they operate over various aspects of the same building block, i.e., counterfactual reasoning, making them umbilically connected.
     
    This talk will present a unified theoretical framework, called causal reinforcement learning, that explores the nuanced interplays between causal inference and reinforcement learning. I will discuss a recent breakthrough in partial identification that allows one to infer unknown causal effects from a combination of model assumptions and available data. Delving deeper, I will then demonstrate how this method can address practical challenges in classic reinforcement learning tasks, including robust off-policy evaluation from confounded observations and accelerating online learning with offline data.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium
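The partial-identification idea in the abstract - bounding an unknown causal effect from confounded observational data - can be illustrated with the classical assumption-free (Manski) bounds for a binary treatment and outcome. This is a textbook special case used for intuition, not the speaker's specific method:

```python
def manski_bounds(p_y1_x1, p_x1):
    """Assumption-free (Manski) bounds on the interventional quantity
    P(Y=1 | do(X=1)) from confounded observational data, binary X and Y.

    p_y1_x1: observed joint probability P(Y=1, X=1)
    p_x1:    observed marginal probability P(X=1)
    """
    # The unobserved stratum P(Y_{x=1}=1 | X=0) can be anywhere in [0, 1]:
    lower = p_y1_x1                   # it contributes nothing
    upper = p_y1_x1 + (1.0 - p_x1)   # it contributes fully
    return lower, upper

# Example: P(Y=1, X=1) = 0.3 and P(X=1) = 0.6 give bounds of about (0.3, 0.7)
lo, hi = manski_bounds(0.3, 0.6)
```

The width of the interval equals P(X=0), the probability mass over which the data are silent; model assumptions (as in the talk) shrink this interval further.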

    Biography: Junzhe Zhang is a postdoctoral research scientist in the Causal AI lab at Columbia University. He obtained his doctoral degree in Computer Science at Columbia University, advised by Elias Bareinboim. His research centers on causal inference theory and its applications in reinforcement learning, algorithmic fairness, and explainability. His works have been selected for oral presentations in top refereed venues such as NeurIPS.

    Host: Sven Koenig

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Ph.D. Thesis Defense - Ali Omrani

    Mon, Mar 25, 2024 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Ph.D. Thesis Defense - Ali Omrani
     
    Committee: Morteza Dehghani (Chair),  Xiang Ren, Robin Jia, Payam Piray, and Jeffrey Sorensen 
     
    Title: Countering Problematic Content in Digital Space: Bias Reduction and Dynamic Content Adaptation
     
    Abstract: Problematic content, such as hate speech, poses a significant challenge to society, leading to discrimination and exclusion while undermining inclusivity and well-being. This thesis proposal outlines my efforts to create adaptable solutions for combating problematic content in digital space through a theory-motivated approach that bridges language technology and social sciences. I will begin by presenting an innovative group-agnostic method for bias mitigation in language models, which is grounded in a deep understanding of stereotyping from social psychology. Subsequently, I will introduce a novel continual learning framework for problematic content detection that captures the ever-evolving nature of this issue. Afterward, I discuss my work that extends this framework to multilingual settings, with a specific emphasis on two key aspects: 1. harnessing cultural diversity for cross-lingual transfer of offensive language detection and 2. investigating the challenges posed by disparities in data quality across various languages.
     
    Date and Time: March 25th, 2:00 PM - 4:00 PM
    Location:  Room 266, USC Brain and Creativity Institute 605, 3620 McClintock Ave, Los Angeles, CA 90089
     
     

    Location: Dornsife Neuroscience Imaging Center (DNI) - 266

    Audiences: Everyone Is Invited

    Contact: CS Events

  • CS Colloquium: Xiang Anthony Chen - Catalyzing AI Advances with Human-Centered Interactive Systems

    Tue, Mar 26, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Xiang Anthony Chen, UCLA

    Talk Title: Catalyzing AI Advances with Human-Centered Interactive Systems

    Abstract: Despite the unprecedented advances in AI, there has always been a gap between how well an AI model performs and how such performance can serve humanity. In this seminar, I will describe my past work to close this gap. Specifically, I develop human-centered interactive systems that catalyze advances in AI to achieve three levels of objectives: aligning with human values, assimilating human intents, and augmenting human abilities. Further, I will discuss my ongoing and future research, focused on AI for scientific discovery, AI with Theory of Mind, and AI-mediated human communication.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Xiang ‘Anthony' Chen is an Assistant Professor in UCLA's Department of Electrical & Computer Engineering. He received a Ph.D. in the School of Computer Science at Carnegie Mellon University. Anthony's area of expertise is Human-Computer Interaction (HCI). His research employs human-centered design methods to build systems that catalyze advances in AI to better serve humanity, supported by NSF CAREER Award, ONR YIP Award, Google Research Scholar Award, Intel Rising Star Award, Hellman Fellowship, NSF CRII Award, and Adobe Ph.D. Fellowship. Anthony’s work has resulted in 55+ publications with three best paper awards and three honorable mentions in top-tier HCI conferences.

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CAIS Webinar: Dr. Jessica Ridgway (University of Chicago) - Predictive Analytics for Engagement in HIV Care

    Tue, Mar 26, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dr. Jessica Ridgway, University of Chicago

    Talk Title: Predictive Analytics for Engagement in HIV Care

    Abstract: Engagement in care is essential for the health of people with HIV, but only half of people with HIV in the U.S. receive regular medical care. Dr. Ridgway will discuss her research utilizing machine learning models based on electronic medical record data to predict engagement in care among people with HIV. She has developed machine learning models using structured data as well as natural language processing of unstructured clinical notes. She will discuss challenges and pitfalls in utilizing electronic medical record data for HIV-related predictive modeling, as well as implications for implementation in clinical practice.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium.

    Biography: Jessica Ridgway, MD, MS, is an Associate Professor of Medicine in the Section of Infectious Diseases and Global Health and Director of Medical Informatics at the University of Chicago. She is Director of Predictive Analytics for the Chicago Center for HIV Elimination. Her research focuses on utilizing large electronic medical record databases to understand HIV epidemiology across the continuum of care and implementation of clinical informatics interventions to improve HIV care and prevention.

    Host: USC Center for Artificial Intelligence in Society (CAIS)

    More Info: https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

    Location: Zoom only - https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/webinar/register/WN_gEn8OHXBQnmpYiWc9hJimw

  • PhD Dissertation Defense - Aniruddh Puranic

    Tue, Mar 26, 2024 @ 03:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Dissertation Defense - Aniruddh Puranic
     
    Committee: Jyotirmoy V. Deshmukh (Chair), Gaurav Sukhatme, Stefanos Nikolaidis, and Stephen Tu
     
    Title: Sample-Efficient and Robust Neurosymbolic Learning from Demonstrations
     
    Abstract: Learning-from-demonstrations (LfD) is a popular paradigm to obtain effective robot control policies for complex tasks via reinforcement learning (RL) without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns of safety and interpretability in the learned control policies. To address these issues, this thesis develops a neurosymbolic learning framework: a hybrid method that integrates neural network-based learning with symbolic (e.g., rule, logic, graph) reasoning to leverage the strengths of both approaches. Specifically, this framework uses Signal Temporal Logic (STL) to express high-level robotic tasks and its quantitative semantics to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards and are also capable of defining interesting causal dependencies between tasks, such as sequential task specifications. This dissertation presents the LfD-STL framework, which learns from even suboptimal/imperfect demonstrations and STL specifications to infer reward functions; these reward functions can then be used by reinforcement learning algorithms to obtain control policies. Experimental evaluations on a diverse set of environments show that the additional information in the form of formally specified task objectives allows the framework to outperform prior state-of-the-art LfD methods.
     
    Many real-world robotic tasks consist of multiple objectives (specifications), some of which may be inherently competitive, thus prompting the need for deliberate trade-offs. This dissertation then further extends the LfD-STL framework by developing a metric - the performance graph - a directed graph that utilizes the quality of demonstrations to provide intuitive explanations about the performance and trade-offs of demonstrated behaviors. The performance graph also offers concise insights into the learning process of the RL agent, thereby enhancing interpretability, as corroborated by a user study. Finally, the thesis discusses how performance graphs can be used as an optimization objective to guide RL agents to potentially learn policies that perform better than the (imperfect) demonstrators via apprenticeship learning (AL). The theoretical machinery developed for the AL-STL framework examines guarantees on the safety and performance of RL agents.
     
    Zoom: https://usc.zoom.us/j/98964159897?pwd=a2ljaGNEOGcvMkl1WU9yZENPc0M1dz09
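The STL quantitative semantics used to rank demonstrations can be sketched for the simple formula G(signal >= threshold) ("always stay above the threshold"), whose robustness is the worst-case margin over a trace. The traces and names below are toy examples, not data from the dissertation:

```python
def robustness_always_ge(trace, threshold):
    """Quantitative semantics of the STL formula G(signal >= threshold):
    the worst-case margin over the trace. Positive => satisfied,
    negative => violated, and the magnitude measures by how much."""
    return min(x - threshold for x in trace)

# Rank demonstrations by how robustly they keep a signal above a safety
# threshold; the LfD-STL idea is to use such scores to weight even
# suboptimal demonstrations when inferring a reward function.
demos = {
    "demo_a": [1.2, 1.5, 1.1],
    "demo_b": [0.9, 2.0, 1.8],
    "demo_c": [1.4, 1.6, 1.9],
}
ranked = sorted(demos, key=lambda d: robustness_always_ge(demos[d], 1.0),
                reverse=True)
# demo_c has the largest worst-case margin (0.4); demo_b violates (-0.1)
```

Because robustness is a real number rather than a boolean, it induces a total order on demonstrations, which is what makes ranking (and non-Markovian reward shaping) possible.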

    Location: Ronald Tutor Hall of Engineering (RTH) - 306

    Audiences: Everyone Is Invited

    Contact: Aniruddh Puranic

    Event Link: https://usc.zoom.us/j/98964159897?pwd=a2ljaGNEOGcvMkl1WU9yZENPc0M1dz09

  • CS Colloquium: Paul Liang - Foundations of Multisensory Artificial Intelligence

    Wed, Mar 27, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Paul Liang, CMU

    Talk Title: Foundations of Multisensory Artificial Intelligence

    Abstract: Building multisensory AI systems that learn from multiple sensory inputs such as text, speech, video, real-world sensors, wearable devices, and medical data holds great promise for impact in many scientific areas with practical benefits, such as in supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. In this talk, I will discuss my research on the machine learning principles of multisensory intelligence, as well as practical methods for building multisensory foundation models over many modalities and tasks. In the first half, I will present a theoretical framework formalizing how modalities interact with each other to give rise to new information for a task. These interactions are the basic building blocks in all multimodal problems, and their quantification enables users to understand their multimodal datasets and design principled approaches to learn these interactions. In the second part, I will present my work in cross-modal attention and multimodal transformer architectures that now underpin many of today’s multimodal foundation models. Finally, I will discuss our collaborative efforts in scaling AI to many modalities and tasks for real-world impact on mental health, cancer prognosis, and robot control.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium
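Cross-modal attention, mentioned in the second part of the talk, can be sketched as scaled dot-product attention in which queries come from one modality and keys/values from another. The shapes and data below are illustrative only:

```python
import numpy as np

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention where queries come from one modality
    (e.g., text tokens) and keys/values from another (e.g., image patches)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))    # 4 text tokens, dimension 8
image = rng.normal(size=(6, 8))   # 6 image patches, dimension 8
attended = cross_modal_attention(text, image, image)  # shape (4, 8)
```

Each text token ends up as a convex combination of image-patch features, which is the basic mechanism by which one modality conditions on another in multimodal transformers.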

    Biography: Paul Liang is a Ph.D. student in Machine Learning at CMU, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. He studies the machine learning foundations of multisensory intelligence to design practical AI systems that integrate, learn from, and interact with a diverse range of real-world sensory modalities. His work has been applied in affective computing, mental health, pathology, and robotics. He is a recipient of the Siebel Scholars Award, Waibel Presidential Fellowship, Facebook PhD Fellowship, Center for ML and Health Fellowship, Rising Stars in Data Science, and 3 best paper/honorable mention awards at ICMI and NeurIPS workshops. Outside of research, he received the Alan J. Perlis Graduate Student Teaching Award for instructing courses on multimodal ML and advising students around the world in directed research.

    Host: Willie Neiswanger / Xiang Ren

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Computer Science General Faculty Meeting

    Wed, Mar 27, 2024 @ 12:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Bi-Weekly regular faculty meeting for invited full-time Computer Science faculty only. Event details emailed directly to attendees.

    Location: Ronald Tutor Hall of Engineering (RTH) - 526

    Audiences: Invited Faculty Only

    Contact: Assistant to CS Chair

  • PhD Thesis Proposal- Xin Qin

    Wed, Mar 27, 2024 @ 12:45 PM - 01:45 PM

    Thomas Lord Department of Computer Science

    Student Activity


    PhD Thesis Proposal- Xin Qin
    Title: Data-driven and Logic-based Analysis of Learning-enabled Cyber-Physical Systems
    Committee: Jyotirmoy Deshmukh, Chao Wang, Souti Chattopadhyay, Yan Liu and Paul Bogdan
     

    Abstract: Rigorous analysis of cyber-physical systems (CPS) is becoming increasingly important, especially for safety-critical applications that use learning-enabled components. In this proposal, we will discuss various pieces of a broad framework that enable scalable reasoning techniques tuned to modern software design practices in autonomous CPS applications. The proposal will center around three main pillars: (1) Statistical verification techniques to give probabilistic guarantees on system correctness; here, we treat the underlying CPS application as a black-box and use distribution-free and model-free techniques to provide probabilistic correctness guarantees. (2) Predictive monitoring techniques that use physics-based or data-driven models of the system to continuously monitor logic-based requirements of systems operating in highly uncertain environments; this allows us to design runtime mitigation approaches to take corrective actions before a safety violation can occur. (3) Robust testing for CPS using reinforcement learning. We train an agent to produce a policy to initiate unsafe behaviors in similar target systems without the need for retraining, thereby allowing for the elicitation of faulty behaviors across various systems.  The proposal hopes to demonstrate the scalability of our approaches on various realistic models of autonomous systems.
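A distribution-free, model-free probabilistic guarantee of the kind described in pillar (1) can be sketched with a Hoeffding confidence interval over black-box simulation outcomes. This is a generic illustration of the style of guarantee, not necessarily the proposal's exact technique:

```python
import math

def verify_probability(satisfied, trials, delta=0.05):
    """Distribution-free estimate of p = P(spec satisfied) for a black-box
    system, with a two-sided Hoeffding interval: |p_hat - p| <= eps holds
    with probability >= 1 - delta, where eps = sqrt(ln(2/delta) / (2n))."""
    p_hat = satisfied / trials
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * trials))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# Suppose 960 of 1000 black-box simulations satisfied the requirement:
lo, hi = verify_probability(960, 1000)
```

Note the guarantee needs no model of the system and no assumption on the distribution of trajectories beyond i.i.d. sampling, which is what "distribution-free and model-free" buys.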

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 349

    Audiences: Everyone Is Invited

    Contact: Xin Qin

  • CS Colloquium: Teodora Baluta - New Algorithmic Tools for Rigorous Machine Learning Security Analysis

    Wed, Mar 27, 2024 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Teodora Baluta, National University of Singapore

    Talk Title: New Algorithmic Tools for Rigorous Machine Learning Security Analysis

    Abstract: Machine learning security is an emerging area with many open questions that lack systematic analysis. In this talk, I will present three new algorithmic tools to address this gap: (1) algebraic proofs; (2) causal reasoning; and (3) sound statistical verification. Algebraic proofs provide the first conceptual mechanism to resolve intellectual property disputes over training data. I show that stochastic gradient descent, the de facto training procedure for modern neural networks, is a collision-resistant computation under precise definitions. These results open up connections to lattices, mathematical tools currently used in cryptography. I will also briefly mention my efforts to analyze the causes of empirical privacy attacks and defenses using causal models, and to devise statistical verification procedures with ‘probably approximately correct’ (PAC)-style soundness guarantees.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Teodora Baluta is a Ph.D. candidate in Computer Science at the National University of Singapore. She enjoys working on security problems that are both algorithmic in nature and practically relevant. She is one of the EECS Rising Stars 2023, a Google PhD Fellow, a Dean’s Graduate Research Excellence Award recipient and a President’s Graduate Fellowship recipient at NUS. She interned at Google Brain working in the Learning for Code team. Her works are published in security (CCS, NDSS), programming languages/verification conferences (OOPSLA, SAT), and software engineering conferences (ICSE, ESEC/FSE). More details are available on her webpage: https://teobaluta.github.io/

    Host: Mukund Raghothaman

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • CS Colloquium: Yangsibo Huang - Auditing Policy Compliance in Machine Learning Systems

    Thu, Mar 28, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Yangsibo Huang, Princeton University

    Talk Title: Auditing Policy Compliance in Machine Learning Systems

    Abstract: As the capabilities of large-scale machine learning models expand, so too do their associated risks. There is an increasing demand for policies that mandate these models to be safe, privacy-preserving, and transparent regarding data usage. However, there are significant challenges in developing enforceable policies and translating their qualitative mandates into quantitative, auditable, and actionable criteria. In this talk, I will present my work addressing these challenges. I will first share my exploration of privacy leakage and mitigation strategies in distributed training. Then, I will explore strategies for auditing compliance with data transparency regulations. I will also examine methods to quantify and assess the fragility of safety alignment in Large Language Models. Finally, I will discuss my plans for future research directions, including collaboration with policy researchers and policymakers.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Yangsibo Huang is a Ph.D. candidate and Wallace Memorial Fellow at Princeton University.  She has been doing research at the intersection of machine learning, systems, and policy, with a focus on auditing and improving machine learning systems’ compliance with policies, from the perspectives of privacy, safety, and data usage. She interned at Google AI, Meta AI, and Harvard Medical School and was named an EECS rising star in 2023.   

    Host: Yue Zhao

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Dissertation Defense - Chuizheng Meng

    Thu, Mar 28, 2024 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee Members: Yan Liu (Chair), Willie Neiswanger, and Assad A Oberai (external member)
     
    Title: Trustworthy Spatiotemporal Prediction Models
     
    Abstract: With the great success of data-driven machine learning methods, concerns about the trustworthiness of machine learning models have emerged in recent years. From the modeling perspective, the lack of trustworthiness amplifies the effect of insufficient training data: purely data-driven models without constraints from domain knowledge tend to overfit and lose generalizability to unseen data. Meanwhile, concerns about data privacy further restrict the availability of data from more providers. On the application side, the absence of trustworthiness hinders the application of data-driven methods in domains such as spatiotemporal forecasting, which involves data from critical applications including traffic, climate, and energy. My dissertation constructs spatiotemporal prediction models with enhanced trustworthiness from both the model and the data aspects. For model trustworthiness, the dissertation focuses on improving the generalizability of models via the integration of physics knowledge. For data trustworthiness, the dissertation proposes a spatiotemporal forecasting model in the federated learning setting, where data in a network of nodes is generated locally on each node and remains decentralized. Furthermore, the dissertation combines the generalizability of knowledge-informed models with the privacy preservation of federated learning for spatiotemporal modeling.
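The federated setting described above, where raw data never leaves a node and only model parameters are communicated, can be sketched with one FedAvg-style aggregation round (a generic illustration; the dissertation's forecasting model is more involved):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average each client's parameter
    vector, weighted by its local dataset size. Only parameters are
    shared; the training data stays decentralized on each node."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three nodes with local parameter vectors and local dataset sizes:
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_w = fedavg(weights, sizes)  # -> array([3.5, 4.5])
```

The server then broadcasts `global_w` back to the nodes for the next round of local training, so privacy-sensitive spatiotemporal data never crosses the network.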

    Location: Waite Phillips Hall Of Education (WPH) - B26

    Audiences: Everyone Is Invited

    Contact: Chuizheng Meng

  • CS Colloquium: Ram Sundara Raman - Global Investigation of Network Connection Tampering

    Thu, Mar 28, 2024 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Ram Sundara Raman, University of Michigan

    Talk Title: Global Investigation of Network Connection Tampering

    Abstract: As the Internet's user base and the criticality of online services continue to expand, powerful adversaries such as Internet censors are increasingly monitoring and restricting Internet traffic. These adversaries, powered by advanced network technology, perform large-scale connection tampering attacks that seek to prevent users from accessing specific online content, compromising Internet availability and integrity. In recent years, we have witnessed recurring censorship events affecting Internet users globally, with far-reaching social, financial, and psychological consequences, making them important to study. However, characterizing tampering attacks at a global scale is an extremely challenging problem, given intentionally opaque practices by adversaries, varying tampering mechanisms and policies across networks, evolving environments, sparse ground truth, and safety risks in collecting data.
     
    In this talk, I will describe my research on building empirical methods to characterize connection tampering globally and investigate the network technology enabling tampering. First, I will describe a modular design for the Censored Planet Observatory that enables it to remotely and sustainably measure Internet censorship longitudinally in more than 200 countries. I will introduce time series analysis methods to detect key censorship events in longitudinal Censored Planet data, and reveal global censorship trends. I will also briefly describe methods to detect connection tampering using purely passive data. Next, I will introduce novel network measurement methods for locating and examining network devices that perform censorship. Finally, I will describe exciting ongoing and future research directions, such as building intelligent measurement platforms.
     
    This lecture satisfies requirements for CSCI 591: Research Colloquium
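Detecting events in longitudinal measurement data, as mentioned in the abstract, can be illustrated with a simple trailing-window z-score change detector (a generic sketch for intuition, not Censored Planet's actual analysis method; all numbers are made up):

```python
import statistics

def flag_anomalies(series, window=7, z=3.0):
    """Flag indices whose value deviates from the trailing-window mean by
    more than z trailing-window standard deviations."""
    events = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1e-9  # avoid division by zero
        if abs(series[i] - mu) / sd > z:
            events.append(i)
    return events

# Daily blocking rate for some (hypothetical) domain: stable around 2%,
# then a censorship event spikes it to 35% on the last day.
rates = [0.02, 0.021, 0.019, 0.02, 0.022, 0.018, 0.02, 0.021, 0.35]
events = flag_anomalies(rates, window=7)  # -> [8]
```

Real longitudinal censorship data is far noisier (vantage-point churn, transient network failures), which is why robust time-series methods matter in practice.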

    Biography: Ram Sundara Raman is a PhD candidate in Computer Science and Engineering at the University of Michigan, advised by Prof. Roya Ensafi. His research lies in the intersection of computer security, privacy, and networking, employing empirical methods to study large-scale Internet attacks. Ram has been recognized as a Rising Star at the Workshop on Free and Open Communications on the Internet (FOCI), and was awarded the IRTF Applied Networking Research Prize in 2023. His work has helped produce one of the biggest active censorship measurement platforms, the Censored Planet Observatory, and has helped prevent large-scale attacks on end-to-end encryption.

    Host: Jyo Deshmukh

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs
