Events Calendar

Events for the 4th week of March

  • CS Colloquium: TBA

    Mon, Mar 18, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: TBA, TBA

    Talk Title: TBA

    Series: Computer Science Colloquium

    Host: Heather Culbertson

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Machine Learning Center Seminar: Lily Weng (UC San Diego) - Towards Interpretable Deep Learning

    Mon, Mar 18, 2024 @ 12:00 PM - 01:30 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Lily Weng, UC San Diego

    Talk Title: Towards Interpretable Deep Learning

    Series: Machine Learning Center Seminar Series

    Abstract: Deep neural networks (DNNs) have achieved unprecedented success across many scientific and engineering fields over the last decade. Despite this empirical success, however, they are notoriously black-box models whose decision processes are difficult to understand. This lack of interpretability is a critical issue that may seriously hinder the deployment of DNNs in high-stakes applications, which require interpretability to trust predictions, understand potential failures, and mitigate harms and eliminate biases in the model.
     
    In this talk, I'll share some exciting results in my lab on advancing explainable AI and interpretable machine learning. Specifically, I will show how we could bring interpretability into deep learning by leveraging recent advances in multi-modal models. I'll present two recent works [1,2] in our group on demystifying neural networks and interpretability-guided neural network design, which are the important first steps to enable Trustworthy AI and Trustworthy Machine Learning. I will also briefly overview our other recent efforts on Trustworthy Machine Learning and automated explanations for LLMs [3].     
     
    [1] Oikarinen and Weng, CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks, ICLR 23 (spotlight)
    [2] Oikarinen, Das, Nguyen and Weng, Label-Free Concept Bottleneck Models, ICLR 23
    [3] Lee, Oikarinen et al., The Importance of Prompt Tuning for Automated Neuron Explanations, NeurIPS 23 ATTRIB workshop

    Biography: Lily Weng is an Assistant Professor in the Halicioglu Data Science Institute at UC San Diego. She received her PhD in Electrical Engineering and Computer Science (EECS) from MIT in August 2020, and her Bachelor's and Master's degrees, both in Electrical Engineering, from National Taiwan University. Prior to UCSD, she spent one year at the MIT-IBM Watson AI Lab and completed several research internships at Google DeepMind, IBM Research, and Mitsubishi Electric Research Lab. Her research interests are in machine learning and deep learning, with a primary focus on trustworthy AI. Her vision is to make the next generation of AI systems and deep learning algorithms more robust, reliable, explainable, trustworthy, and safe. For more details, please see https://lilywenglab.github.io/.

    Host: Yan Liu

    Location: Ronald Tutor Hall of Engineering (RTH) - 306

    Audiences: Everyone Is Invited

    Contact: CS Events

  • CS Colloquium: Sherry Yang - Decision Making with Internet-Scale Knowledge

    Tue, Mar 19, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Sherry Yang, UC Berkeley

    Talk Title: Decision Making with Internet-Scale Knowledge

    Abstract: Machine learning models pretrained on internet data have acquired broad knowledge about the world but struggle to solve complex tasks that require extended reasoning and planning. Sequential decision making, on the other hand, has empowered AlphaGo’s superhuman performance, but lacks visual, language, and physical knowledge about the world. In this talk, I will present my research towards enabling decision making with internet-scale knowledge. First, I will illustrate how language models and video generation are unified interfaces that can integrate internet knowledge and represent diverse tasks, enabling the creation of a generative simulator to support real-world decision-making. Second, I will discuss my work on designing decision making algorithms that can take advantage of generative language and video models as agents and environments. Combining pretrained models with decision making algorithms can effectively enable a wide range of applications, such as developing chatbots, learning robot policies, and discovering novel materials.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Sherry is a final-year PhD student at UC Berkeley advised by Pieter Abbeel and a senior research scientist at Google DeepMind. Her research aims to develop machine learning models with internet-scale knowledge to make better-than-human decisions. To this end, she has developed techniques for generative modeling and representation learning from large-scale vision, language, and structured data, coupled with algorithms for sequential decision making such as imitation learning, planning, and reinforcement learning. Sherry initiated and led the Foundation Models for Decision Making workshop at NeurIPS 2022 and 2023, bringing together research communities in vision, language, planning, and reinforcement learning to solve complex decision making tasks at scale. Before her current role, Sherry received her Bachelor's and Master's degrees from MIT, advised by Patrick Winston and Julian Shun.

    Host: Dani Yogatama

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Thesis Proposal - Yuzhong Huang

    Tue, Mar 19, 2024 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee Members: Fred Morstatter (Chair), Yue Wang, Aiichiro Nakano, & Antonio Ortega
     
    Date & Time: Tuesday, March 19, 2024, 11:00 a.m. - 12:00 p.m. (PST) - PHE 325   
     
    Title: Explicit Control in the Understanding and Generation of 3D world
     
    Abstract: Understanding and recreating our living environment has been a key topic in scientific research, with applications ranging from virtual reality and autonomous driving to generative AI tools. Recent advancements have significantly improved machine models' capability to recognize and generate visually similar 3D objects. However, existing approaches often lack explicit control mechanisms, limiting their adaptability and interpretability. This thesis proposal addresses this gap by focusing on three crucial aspects: (1) explicit control in understanding 3D worlds, achieved through the imposition of planar priors and a plane-splatting volume rendering method; (2) explicit control in generating 3D worlds, enabled by an orientation-conditioned diffusion model; and (3) explicit control in modifying 3D objects, enabled by projecting a text-guided 2D segmentation map onto 3D models. These advancements pave the way for more intuitive and precise manipulation of 3D environments.
     
    Zoom Link: https://usc.zoom.us/j/99330288526

    Location: Charles Lee Powell Hall (PHE) - 325

    WebCast Link: https://usc.zoom.us/j/99330288526

    Audiences: Everyone Is Invited

    Contact: Thomas Lord Department of Computer Science

    Event Link: https://usc.zoom.us/j/99330288526

  • CS Colloquium: Mengyuan Li - Confidential Computing and Trusted Execution Environment: Challenges, Opportunities, and the Future

    Wed, Mar 20, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Mengyuan Li, MIT

    Talk Title: Confidential Computing and Trusted Execution Environment: Challenges, Opportunities, and the Future

    Abstract: Confidential Computing, or the Trusted Execution Environment (TEE), represents a cutting-edge design in server-grade CPUs. This technology acts as a protective shield for cloud tasks, safeguarding the confidentiality and integrity of cloud workloads against a range of threats, including attacks from privileged software, physical attackers, and untrustworthy hypervisors. As the demand for secure private data handling continues to rise, the adoption of Confidential Computing has become widespread across various industries. Evidence of this includes the adoption of TEEs in server-grade CPUs from major vendors like Intel, AMD, and ARM. Furthermore, leading cloud service providers, such as AWS, Google Cloud, Microsoft Azure, and IBM Cloud, now offer commercial Confidential Computing services.

    In this talk, I will outline my contributions to the study of complex, heterogeneous Confidential Computing systems. I will share my insights into two real-world vulnerabilities we uncovered within commercial Confidential Computing systems, along with our joint efforts with CPU manufacturers to address these issues in the latest server-grade CPUs. At the hardware design level, I will discuss a novel ciphertext side-channel attack targeting hardware-accelerated memory encryption, a crucial hardware feature for protecting the memory of cloud workloads. Moving to the software system design level, I will illustrate how inadequately designed TEE operating systems can pose a threat to the security of Confidential VMs. Finally, I will outline my ongoing efforts and future directions in enhancing the security and effectiveness of Confidential Computing, and my research vision towards building secure and performant hardware systems.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Mengyuan Li is a postdoctoral researcher at CSAIL, MIT, under the guidance of Prof. Mengjia Yan. His research focuses on bringing security and trust to hardware systems, with a recent concentration in Confidential Cloud Computing and Trusted Execution Environments. To this end, he has identified real-world hardware vulnerabilities in commodity CPUs, which have been acknowledged by manufacturers through hardware CVEs and several security bulletins. Additionally, he has collaborated closely with industry teams such as AMD, Intel, WolfSSL, and Alibaba Cloud to develop mitigations and design commercial trustworthy hardware systems. His research findings have been published in top security and privacy venues, including S&P, USENIX Security, and CCS, and have been recognized with the CCS 2021 Best Paper Runner-up Award. Before MIT, Mengyuan earned his Ph.D. in Computer Science and Engineering from The Ohio State University (OSU) in 2022.

    Host: Seo Jin Park

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • Computer Science General Faculty Meeting

    Wed, Mar 20, 2024 @ 12:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Biweekly faculty meeting for invited full-time Computer Science faculty only. Event details emailed directly to attendees.

    Location: Hedco Neurosciences Building (HNB) - 107

    Audiences: Invited Faculty Only

    Contact: Assistant to CS Chair

  • CS Colloquium: Andrew Ilyas - Making machine learning predictably reliable

    Thu, Mar 21, 2024 @ 10:00 AM - 11:00 AM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Andrew Ilyas, MIT

    Talk Title: Making machine learning predictably reliable

    Abstract: Despite ML models' impressive performance, training and deploying them is currently a somewhat messy endeavor. But does it have to be? In this talk, I overview my work on making ML “predictably reliable”: enabling developers to know when their models will work, when they will fail, and why. To begin, we use a case study of adversarial inputs to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a line of work that aims to develop a precise understanding of the ML pipeline, combining statistical tools with large-scale experiments to characterize the role of each individual design choice: from how to collect data, to what dataset to train on, to what learning algorithm to use.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Andrew Ilyas is a PhD student in Computer Science at MIT, where he is advised by Aleksander Madry and Constantinos Daskalakis. His research aims to improve the reliability and predictability of machine learning systems. He was previously supported by an Open Philanthropy AI Fellowship.

    Host: Vatsal Sharan

    Location: Olin Hall of Engineering (OHE) - 136

    Audiences: Everyone Is Invited

    Contact: CS Faculty Affairs

  • PhD Thesis Defense - Kushal Chawla

    Thu, Mar 21, 2024 @ 01:30 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Defense - Kushal Chawla  
     
    Title: Computational Foundations for Mixed-Motive Human-Machine Dialogue  
     
    Committee Members:   Gale Lucas (Chair), Jonathan Gratch, Jonathan May, Peter Kim, Maja Mataric  
     
    Abstract: Success in a mixed-motive interaction demands a balance between self-serving and other-serving behaviors. For instance, in a typical negotiation, a player must balance maximizing their own goals with the goals of their partner so as to come to an agreement. If the player asks for too much, this can push the partner to walk away without an agreement, hurting the outcomes for all parties involved. Such interactions are ubiquitous in everyday life, from deciding who performs household chores to customer support and high-stakes business deals. Consequently, AI tools capable of comprehending and participating in such mixed-motive or other social influence interactions (such as argumentation or therapy) find broad applications in pedagogy and conversational AI.
     
    In this thesis, we present our foundational work for enabling mixed-motive human-machine dialogue. I will discuss our progress in three key areas: 1) the design of a novel task and dataset of grounded human-human negotiations that has fueled our investigations into the impact of emotion expression and linguistic strategies, 2) techniques for mixed-motive dialogue systems that learn to strike a balance between self and partner interests, and 3) promoting a research community for dedicated efforts and discussion in this area.
     
     
    https://usc.zoom.us/j/96411089883?pwd=WDNuMjF1NDNTTXV5cDdGaWJzOG9Gdz09

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 110

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/j/96411089883?pwd=WDNuMjF1NDNTTXV5cDdGaWJzOG9Gdz09

  • PhD Dissertation Defense - Arvin Hekmati

    Thu, Mar 21, 2024 @ 02:30 PM - 04:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Dissertation Defense - Arvin Hekmati  
     
    Committee:  Prof. Bhaskar Krishnamachari (Chair), Prof. Cauligi Raghavendra, and  Prof. Aiichiro Nakano     
     
    Title: AI-Enabled DDoS Attack Detection in IoT Systems    
     
    Abstract:
    In this thesis, we develop AI-enabled mechanisms for detecting Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) systems. We introduce a novel, tunable DDoS attack model that emulates benign IoT device behavior using a truncated Cauchy distribution. We investigate these futuristic DDoS attacks that use large numbers of IoT devices and camouflage their attack by having each node transmit at a volume typical of benign traffic. We propose innovative correlation-aware, learning-based frameworks that leverage IoT node correlation data for enhanced detection accuracy. We extensively analyze the proposed architectures by evaluating five different neural network models trained on a dataset derived from a 4060-node real-world IoT system. We observe that long short-term memory (LSTM) and a transformer-based model, in conjunction with the architectures that use correlation information of the IoT nodes, provide higher detection performance than the other models and architectures, especially when the attacker camouflages itself by following a benign traffic distribution on each IoT node. We evaluated our findings through practical implementation on a Raspberry Pi-based testbed. To address the challenge of leveraging massive IoT device arrays for DDoS attacks, we introduce heuristic solutions for selective correlation information sharing among IoT devices. To overcome the challenge of fixed input limitations in conventional machine learning, we propose a model based on the Graph Convolutional Network (GCN) to manage incomplete data in IoT devices caused by network losses. We introduce various IoT device graph topologies, including Network, Peer-to-Peer, and Hybrid topologies, with scenarios of both directed and undirected edges.

    Our simulations reveal that the Hybrid topology, employing correlation-based peer-to-peer undirected edges, achieves the highest detection performance, with at most a 2% drop in performance despite a 50% network connection loss, highlighting the proposed GCN-based model's effectiveness in detecting DDoS attacks under lossy network conditions. Finally, we explore the application of Large Language Models (LLMs) for detecting DDoS attacks and explaining the detection rationale, demonstrating the potential of fine-tuning and few-shot prompt engineering methods to achieve high accuracy and provide insightful detection reasoning.
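    The truncated-Cauchy attack model described in the abstract can be sketched with standard inverse-CDF sampling. This is a minimal illustration, not code from the thesis; the traffic parameters (a mean rate of 100 packets/s, scale 10, truncation to [0, 300]) are hypothetical values chosen only to show the technique.

    ```python
    import numpy as np

    def sample_truncated_cauchy(loc, scale, low, high, size, rng=None):
        """Draw samples from a Cauchy(loc, scale) truncated to [low, high]
        using inverse-CDF sampling."""
        rng = np.random.default_rng() if rng is None else rng
        # Cauchy CDF: F(x) = arctan((x - loc) / scale) / pi + 1/2
        cdf = lambda x: np.arctan((x - loc) / scale) / np.pi + 0.5
        # Restrict the uniform draws to the CDF values of the truncated support,
        # then map them back through the Cauchy quantile function.
        u = rng.uniform(cdf(low), cdf(high), size)
        return loc + scale * np.tan(np.pi * (u - 0.5))

    # Hypothetical benign profile: each attacking node draws its transmit
    # volume from a distribution that mimics benign traffic.
    traffic = sample_truncated_cauchy(loc=100.0, scale=10.0,
                                      low=0.0, high=300.0, size=1000)
    ```

    Because every draw stays inside the benign-looking range, per-node volume thresholds alone cannot flag the attack, which is why the abstract's correlation-aware detectors look across nodes instead.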

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Ellecia Williams

    Event Link: https://usc.zoom.us/j/4677088430 
