University Calendar
Events for May
-
PhD Thesis Defense - Ehsan Qasemi
Mon, May 01, 2023 @ 10:30 AM - 12:30 PM
Thomas Lord Department of Computer Science
Title: Multi-Modal Preconditioned Inference of Commonsense Knowledge
Committee Members: Muhao Chen, Aiichiro Nakano, Daniel O'Leary, Fred Morstatter, Luis Garcia
Abstract: Humans can seamlessly reason with circumstantial preconditions of commonsense knowledge. We understand that a glass is used for drinking water, unless the glass is broken or the water is toxic. Despite state-of-the-art (SOTA) models' impressive performance in inferring commonsense knowledge, it is unclear whether they understand such circumstantial preconditions. In this dissertation, I first propose a novel challenge of reasoning with preconditions attributed to commonsense knowledge, design three tasks based on the challenge in a text-only setup, and show there is a significant gap between the performance of SOTA language models and humans on our tasks. I then use weak supervision in combination with targeted fine-tuning strategies to improve the language models' performance on the preconditioned inference task. Finally, I go beyond the text-only setup and investigate the problem of preconditioned inference in a multi-modal setup, where the model is challenged to infer the preconditions from an image.
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/98769460750
-
PhD Dissertation Defense - Wenxuan Zhou
Mon, May 01, 2023 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Title: Robust and Generalizable Knowledge Acquisition from Text
Committee members: Muhao Chen (chair), Laurent Itti, Tianshun Sun, Robin Jia, Jonathan May
Abstract: With large amounts of digital text generated every day, it is important to acquire structured knowledge from text automatically. In my thesis, I will present my Ph.D. work from two perspectives: (1) improving the robustness of knowledge acquisition, especially against bias from the training corpus; and (2) building data-efficient knowledge acquisition models for low-resource scenarios, which is important for tasks in high-stakes domains. I will then discuss challenges and future directions for both (1) and (2).
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/6915039300
-
PhD Thesis Defense - Yu-Chuan Yen
Wed, May 03, 2023 @ 08:30 AM - 10:30 AM
Thomas Lord Department of Computer Science
Title: Constructing an unambiguous user-and-machine-friendly, natural-language protocol specification system
Committee Members: Barath Raghavan, Ramesh Govindan, Murali Annavaram
Abstract: Protocol specifications have guided the design and implementation of numerous protocols for decades. Although they serve as the guideline and foundation of diverse advanced systems, the methods used to process and compose them have changed little despite emerging advanced techniques. Producing a specification remains labor-intensive and involves rigorous discussion to avoid miscommunication through natural-language media. A key reason is the presence of ambiguities in natural-language documents: an ambiguity may take the form of an unreasonable sentence, a sentence with multiple meanings, or under-specified behavior. However, identifying ambiguities in a domain-specific context is challenging, and the lack of studies applying advanced natural language processing techniques limits our understanding of, and our practices for, improving specification production. Motivated by these observations, this thesis takes the first steps toward a prototype system that is both user- and machine-friendly and can process natural-language protocol specifications while providing guarantees on their level of ambiguity. The contributions are four-fold. First, it applies an advanced natural language processing formalism, Combinatory Categorial Grammar, to analyze protocol specification text and identify ambiguous sentences that could result in buggy implementations. Second, it parses unambiguous English specifications and generates corresponding executable protocol code that can interoperate with well-known third-party code. Third, it defines protocol behaviors with a mathematical definition and introduces unambiguous configurations; such a specification configuration is easy for authors to design and makes it straightforward to automatically generate the corresponding English specification and executable code. Lastly, it categorizes a set of verification rules that help filter out unreasonable configurations that could not be turned into English paragraphs or code blocks.
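The ambiguity-detection idea can be illustrated with a toy example. The sketch below uses a small context-free grammar and NLTK's chart parser rather than the Combinatory Categorial Grammar machinery the thesis describes; the grammar, the example sentence, and the "more than one parse" criterion are hypothetical stand-ins, not the thesis's pipeline.

```python
# Toy illustration of parse-based ambiguity detection (not the thesis's CCG pipeline):
# a sentence from a spec is flagged as ambiguous when the grammar admits more than
# one parse. The grammar and example sentence are hypothetical.
import nltk

SPEC_GRAMMAR = nltk.CFG.fromstring("""
S  -> NP VP
NP -> N | N PP
VP -> V NP | V NP PP
PP -> P NP
N  -> 'sender' | 'packet' | 'flag' | 'header'
V  -> 'sets'
P  -> 'with'
""")

def count_parses(sentence: str) -> int:
    parser = nltk.ChartParser(SPEC_GRAMMAR)
    return sum(1 for _ in parser.parse(sentence.split()))

# "sender sets flag with header" can attach "with header" to the verb or to the noun,
# so the grammar yields two parses and the sentence would be flagged for review.
n = count_parses("sender sets flag with header")
print(f"parses found: {n} -> {'ambiguous' if n > 1 else 'unambiguous'}")
```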
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/2553045376
-
PhD Thesis Proposal - Arvin Hekmati
Wed, May 03, 2023 @ 03:00 PM - 04:30 PM
Thomas Lord Department of Computer Science
Title: Correlation-Aware Neural Networks for DDoS Attack Detection In IoT Systems
Committee Members: Bhaskar Krishnamachari (Chair), Cyrus Shahabi, Aiichiro Nakano, Mohammad Rostami, Cauligi Raghavendra
Abstract: We present a comprehensive study on applying machine learning to detect distributed denial-of-service (DDoS) attacks using large-scale Internet of Things (IoT) systems. While prior work and existing DDoS attacks have largely focused on individual nodes transmitting packets at a high volume, we investigate more sophisticated futuristic attacks that use large numbers of IoT devices and camouflage the attack by having each node transmit at a volume typical of benign traffic. We introduce new correlation-aware architectures that take into account the correlation of traffic across IoT nodes, and we also compare the effectiveness of centralized and distributed detection models. We extensively analyze the proposed architectures by evaluating five different neural network models trained on a dataset derived from a 4060-node real-world IoT system. We observe that long short-term memory (LSTM) and a transformer-based model, in conjunction with architectures that use correlation information across IoT nodes, provide higher performance (in terms of F1 score and binary accuracy) than the other models and architectures, especially when the attacker camouflages itself by following a benign traffic distribution on each transmitting node. For instance, with the LSTM model, the distributed correlation-aware architecture achieves an 81 percent F1 score against an attacker that camouflages its attack with benign traffic, compared to 35 percent for the architecture that does not use correlation information. We also investigate the performance of heuristics for selecting a subset of nodes to share their data in the correlation-aware architectures in order to meet resource constraints.
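As a rough illustration of what a correlation-aware detector could look like, the sketch below stacks the traffic volumes of an IoT node and several neighbors into one input sequence and classifies the window with an LSTM. The shapes, layer sizes, and names are illustrative assumptions, not the architecture evaluated in the thesis.

```python
# Hypothetical sketch of a correlation-aware detector: each sample is a window of
# traffic volumes for one IoT node plus its correlated neighbor nodes, and an LSTM
# classifies the window as benign or attack. Shapes and names are illustrative only.
import torch
import torch.nn as nn

class CorrelationAwareLSTM(nn.Module):
    def __init__(self, num_nodes: int, hidden_size: int = 64):
        super().__init__()
        # input at each time step: traffic volume of this node and its neighbors
        self.lstm = nn.LSTM(input_size=num_nodes, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # binary output: benign vs. attack

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, num_nodes)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

# Example: windows of 30 time steps over a node and 7 neighbors.
model = CorrelationAwareLSTM(num_nodes=8)
windows = torch.rand(16, 30, 8)      # 16 windows of normalized traffic volume
attack_prob = model(windows)         # (16,) probabilities of an ongoing attack
```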
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/92583528716?pwd=S01uOUlYQXU5Z0xudXZXbzgwOE0wQT09
-
PhD Thesis Defense - Leili Tavabi
Wed, May 03, 2023 @ 03:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Committee Members: Mohammad Soleymani (Chair), Maja Mataric, Shrikanth Narayanan, Stefan Scherer
Title: Computational Modeling of Mental Health Therapy Sessions
Abstract: Despite the growing prevalence of mental health disorders, there is a large gap between the need for diagnosis and treatment and the available resources. Recent advancements in machine learning and deep learning provide an opportunity to develop AI-assisted assessment of therapy sessions through automated behavior analysis. In this dissertation, I present multiple approaches for modeling and analyzing client-therapist dialogue from real-world motivational interviews toward efficient and systematic assessment of the sessions. I present models for automatic recognition of client intent at the local utterance level and of quality metrics such as therapist empathy at the global session level. I further explore the association of in-session behaviors with subsequent outcomes, and provide interpretable insights into psychologically relevant features associated with the modeled constructs.
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/96609451060?pwd=YnhUOWxjY0ZCaWFadkR4S2srNmZKZz09
-
PhD Defense - Su Lei
Thu, May 04, 2023 @ 01:00 PM - 03:00 PM
Thomas Lord Department of Computer Science
Committee: Jonathan Gratch (Chair), Laurent Itti, Shri Narayanan
Abstract: In this dissertation, I innovate automatic facial analysis methods and use them to yield fundamental insights into the source and function of facial expressions in face-to-face social interaction. Facial expressions play an essential role in shaping human social behavior. The ability to accurately recognize, interpret and respond to emotional expressions is a hallmark of human social intelligence, and automating this ability is a key focus of computer science research. Machines that possess this skill could enhance the capabilities of human-machine interfaces, help diagnose social disorders, improve predictive models of human behavior, or serve as methodological tools in social science research. My dissertation focuses on this last application. Specifically, I examine two competing perspectives on the social meaning of facial expressions and show that automated methods can yield novel insights. In terms of technical innovation, I develop novel methods to interpret the meaning of facial expressions in terms of facial expressivity. Within computer science, facial expression analysis has been heavily influenced by the "basic emotion theory" which claims that expressions reflect the activation of a small number of discrete emotions (e.g., joy, hope, or fear). Thus, automatic emotion recognition methods seek to classify facial displays into these discrete categories to form insights into how an individual is interpreting a situation and what they will do next. However, more recent psychological findings have largely discredited this theory, highlighting that people show a wide range of idiosyncratic expressions in response to the same event. Motivated by this more recent research, I develop supervised machine learning models to automatically measure perceived expressivity from video data. In terms of theoretical innovation, I demonstrate how automatic expressivity recognition yields insight into alternative psychological theories on the nature of emotional expressions in social tasks by analyzing a large corpus of people engaged in the iterated prisoner's dilemma task. This is a canonical task used to test theories of social cognition and the function of facial expressions. First, I explore the appraisal perspective which claims that expressions reflect an individual's appraisal of how actions within a social task relate to their goals. I find that by analyzing facial expressions produced by participants, a computer can reliably predict how actions in the task impact participants' appraisals (specifically, we predict if the action was unexpected). Further, we show that automatic expressivity recognition dramatically improves the accuracy of these predictions over traditional emotion recognition. This lends support to the theory that expressions are, in a sense, directly caused by the social task. Second, I explore a contrasting perspective, interpersonal-dynamics theory, which argues that expressions are, in a sense, directly caused by the partner's expressions. This perspective emphasizes processes such as synchrony, mimicry, and contagion to explain moment-to-moment expressions. The appraisal perspective counters that any observed synchrony simply reflects a shared appraisal of social actions. I use automatic expressivity recognition to contrast these perspectives. 
Specifically, I analyze synchrony in two experimental conditions: a "still" condition where dyads see only a still image of their partner, and a "video" condition with real-time visual access to their partner's facial reactions. Using Dynamic Time Warping, I evaluate synchrony in both real and randomly paired dyads. Results reveal that synchrony exists even without visual cues, suggesting that shared appraisals contribute to synchrony, but that synchrony significantly increases when the partner is visible. This suggests that both perspectives must be integrated to best explain facial displays. In conclusion, both appraisal and interpersonal-dynamics perspectives reinforce the significance of emotional expressivity in interpreting facial displays and fostering social coordination in cooperative and competitive contexts. These insights offer valuable contributions to affective computing and the understanding of social interaction mechanisms. I also discuss potential limitations and future research directions for further exploring the complexities of social interactions.
Location: https://usc.zoom.us/j/6448851979?pwd=TThsRC96Vk9KZnVLV0RIc1g5NGVuQT09
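A minimal sketch of the DTW-based synchrony comparison described above, assuming per-frame expressivity scores are already available; the expressivity traces and dyad pairings below are simulated rather than taken from the corpus.

```python
# Minimal sketch of DTW-based synchrony analysis: compare the mean warping distance
# between real dyad partners against randomly re-paired partners. Data are simulated.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D expressivity series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

rng = np.random.default_rng(0)
# Simulated expressivity traces for 20 dyads (partner A and partner B, loosely coupled).
dyads_a = [rng.standard_normal(100) for _ in range(20)]
dyads_b = [0.6 * a + 0.4 * rng.standard_normal(100) for a in dyads_a]

real = np.mean([dtw_distance(a, b) for a, b in zip(dyads_a, dyads_b)])
shuffled = np.mean([dtw_distance(a, dyads_b[(i + 1) % 20]) for i, a in enumerate(dyads_a)])
# Lower distance for real pairs than for randomly re-paired partners indicates synchrony.
print(f"mean DTW distance, real pairs: {real:.1f}  random pairs: {shuffled:.1f}")
```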
Audiences: Everyone Is Invited
Contact: Asiroh Cham
-
PhD Thesis Proposal - Mehrnoosh Mirtaheri
Mon, May 08, 2023 @ 02:00 PM - 03:30 PM
Thomas Lord Department of Computer Science
Committee members: Aram Galstyan, Mohammad Rostami, Fred Morstatter, Cyrus Shahabi, Antonio Ortega
Title: Scalable Graph-Based Models for Temporal Knowledge Graphs: Learning, Applications
Abstract: Temporal knowledge graphs (TKGs) have emerged as a powerful tool for modeling relationships between entities in large raw text datasets. By capturing and representing these relationships in a structured, interpretable format, TKGs enable the extraction of valuable insights from vast amounts of unstructured information. Knowledge graphs allow for the identification of patterns and trends over time, enhancing our understanding of evolving connections and interactions between various entities. Moreover, they facilitate complex reasoning tasks, question answering, and data-driven decision making by offering a more comprehensive view of the relationships found within the text.
This thesis focuses on developing various models to address different challenges associated with TKG completion, such as data scarcity, scalability, and continuously evolving data. By tackling these challenges, the proposed models aim to enhance the capabilities of TKGs for analyzing and processing complex relationships within large-scale text data. As a result, they enable more accurate and effective knowledge extraction and representation. The advancements presented in this thesis can greatly benefit a wide range of applications that rely on understanding the underlying structure of relationships in massive raw text datasets.
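As a generic illustration of the data structure involved (not a model from the proposal), the sketch below represents a temporal knowledge graph fact as a quadruple (head, relation, tail, timestamp) and scores it with a TransE-style translational distance that adds a timestamp embedding; all names and dimensions are hypothetical.

```python
# Generic illustration of temporal KG facts and a simple embedding-based scorer.
# This is not the proposal's model; it only shows how a quadruple can be scored.
import torch
import torch.nn as nn

class TemporalKGScorer(nn.Module):
    def __init__(self, n_entities: int, n_relations: int, n_timestamps: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.ts = nn.Embedding(n_timestamps, dim)

    def forward(self, head, rel, tail, time):
        # TransE-style distance with a timestamp term; lower score = more plausible fact.
        translation = self.ent(head) + self.rel(rel) + self.ts(time)
        return torch.norm(translation - self.ent(tail), p=1, dim=-1)

scorer = TemporalKGScorer(n_entities=10_000, n_relations=200, n_timestamps=365)
# Score the hypothetical quadruple (entity 12, relation 3, entity 845) at day 17.
score = scorer(torch.tensor([12]), torch.tensor([3]), torch.tensor([845]), torch.tensor([17]))
```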
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/99893841028?pwd=RlhVd29VcTltdnFCRW54dHc3ZjhrZz09
-
PhD Thesis Proposal - Yufeng Yin
Mon, May 08, 2023 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
Committee Members: Mohammad Soleymani (chair), Jonathan Gratch, Mayank Kejriwal, Lynn Miller, Maja Mataric, and Xuezhe Ma
Title: Towards Generalizable Facial Expression and Emotion Recognition
Abstract: Facial expression and emotion recognition are critical components of human behavior understanding. However, the performance of automatic recognition methods degrades when evaluated across datasets or subjects, due to variations in individuals and environmental factors. The manual coding required by supervised methods also presents significant practical limitations, since it is not feasible when working with new datasets or individuals.
In this thesis proposal, we investigate how to improve the generalization ability of the perception model through representation learning and synthetic data generation with minimal human effort. (i) We explore unsupervised domain adaptation (UDA) approaches to obtain domain-invariant and discriminative features without any target labels. The experiments show that UDA can effectively reduce the domain gap between datasets or subjects and improve the model's cross-corpus performance for emotion recognition. (ii) We explore approaches for synthetic data generation to address the scarcity of labeled data and the diversity of subjects. Our results indicate that synthetic data can improve not only action unit (AU) detection performance but also fairness across genders, demonstrating its potential for AU detection in the wild. We will also discuss our future work on unsupervised personalization for unseen speakers in emotion recognition through feature representation learning and label distribution calibration. Our proposed methods enhance model recognition accuracy and generalization to unseen subjects and datasets, paving the way for more effective human behavior analysis in a variety of applications.
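One common unsupervised domain adaptation recipe, a DANN-style gradient reversal layer trained against a domain classifier, is sketched below purely to illustrate how domain-invariant features can be learned without target labels; it is an assumption, not necessarily the approach used in the proposal.

```python
# DANN-style UDA sketch: an encoder is trained so a domain classifier cannot tell
# source from target, while an emotion head is trained on labeled source data only.
# Dimensions and names are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim: int = 512, feat_dim: int = 128, n_emotions: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.emotion_head = nn.Linear(feat_dim, n_emotions)  # trained on labeled source data
        self.domain_head = nn.Linear(feat_dim, 2)            # source vs. target discriminator

    def forward(self, x, lam: float = 1.0):
        feat = self.encoder(x)
        # The reversed gradient pushes the encoder toward domain-invariant features.
        return self.emotion_head(feat), self.domain_head(GradReverse.apply(feat, lam))
```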
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/94638614965?pwd=c0ozL09VVjVBNmNwRmQ4NTAybWwzdz09
-
PhD Defense - Isabel Rayas
Mon, May 08, 2023 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
In-person: RTH 306
Zoom: https://usc.zoom.us/j/95235693966?pwd=cE92UC8zejROMi8yYytyT3F5YnY1UT09
Committee: Gaurav Sukhatme (Chair), David Caron, Stefanos Nikolaidis
Title: Advancing Robot Autonomy for Long-Horizon Tasks
Abstract:
Autonomous robots have real-world applications in diverse fields, such as mobile manipulation and environmental exploration, and many such tasks benefit from a hands-off approach to human user involvement over a long task horizon. However, the level of autonomy achievable by a deployed system is limited in part by the problem definition or task specification the system requires. Task specifications often require technical, low-level information that is unintuitive to describe and may result in generic solutions, burdening the user technically both before and after task completion. In this thesis, we aim to advance task-specification abstraction toward the goal of increasing robot autonomy in real-world scenarios. We do so by tackling problems that address several different angles of this goal. First, we develop a method for automatically discovering optimal transition points between subtasks in the context of constrained mobile manipulation, removing the need for a human to hand-specify them in the task specification. We further propose a way to describe constraints on robot motion automatically from demonstrated data rather than from manually defined constraints. Then, within the context of environmental exploration, we propose a flexible task specification framework that requires only a set of quantiles of interest from the user and allows the robot to directly suggest locations in the environment for the user to study. We next systematically study the effect of including a robot team in the task specification and show that multi-robot teams can improve performance under certain specification conditions, including enabling inter-robot communication. Finally, we propose methods for a communication protocol that autonomously selects useful but limited information to share with the other robots.
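The quantile-based specification idea can be illustrated with a toy sketch: given predicted values of an environmental quantity over candidate locations, suggest the location whose prediction is closest to each quantile the user requests. The function and data below are hypothetical stand-ins, not the dissertation's planner.

```python
# Toy sketch of a quantile-based task specification: the user supplies only the
# quantiles of interest, and a measurement location is suggested for each one.
import numpy as np

def suggest_locations(locations: np.ndarray, predicted: np.ndarray,
                      quantiles: list[float]) -> list[tuple[float, np.ndarray]]:
    """Return (quantile, location) pairs, one suggested location per requested quantile."""
    targets = np.quantile(predicted, quantiles)
    picks = []
    for q, target in zip(quantiles, targets):
        idx = int(np.argmin(np.abs(predicted - target)))
        picks.append((q, locations[idx]))
    return picks

rng = np.random.default_rng(1)
grid = rng.uniform(0, 100, size=(500, 2))        # candidate (x, y) locations
concentration = rng.gamma(2.0, 2.0, size=500)    # e.g. predicted chlorophyll levels
for q, loc in suggest_locations(grid, concentration, quantiles=[0.5, 0.9, 0.99]):
    print(f"quantile {q:.2f}: sample near x={loc[0]:.1f}, y={loc[1]:.1f}")
```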
Location: Ronald Tutor Hall of Engineering (RTH) - 306
Audiences: Everyone Is Invited
Contact: Asiroh Cham
-
PhD Defense - Michiel De Jong
Mon, May 08, 2023 @ 03:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Title: Expanding the Quality-Compute Frontier for Retrieval-Augmented Language Models
Abstract: Retrieval-augmented language models set the state of the art on a broad spectrum of knowledge-intensive tasks, outperforming models that are orders of magnitude larger. However, such models can also be expensive to train and to run at inference time. Model performance and computational cost are two sides of the same coin: we can generally improve performance through scale at the expense of an increased computational burden. Therefore, we are really interested in pushing out the quality-compute frontier, improving performance at any given level of computational resources.
In this dissertation, I analyze the factors that determine the computational burden of retrieval-augmented language models and propose strategies to extract a better performance-compute trade-off. The dissertation consists of three sections. The first section contains a detailed analysis of components of retrieval-augmented models and introduces methods to improve generation efficiency. The second section explores the use of dense memory to reduce the cost of encoding retrievals. Finally, the third section proposes a hybrid between dense memory and text retrieval, combining lessons from previous chapters.
Committee Members: Leana Golubchik (Chair), Fei Sha, Dani Yogatama, Jacob Bien
Location: https://usc.zoom.us/my/lgzoomeeting
Audiences: Everyone Is Invited
Contact: Asiroh Cham
-
PhD Dissertation Defense - Zimo Li
Tue, May 09, 2023 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
Committee Members: Andrew Nealen, Laurent Itti, Stefanos Nikolaidis, Mike Zyda
Title: Human Appearance and Performance Synthesis Using Deep Learning
Abstract: Synthesis of human performances is a highly sought-after technology in the entertainment industry. In this dissertation, we will go over several new deep learning solutions that tackle the problems of human facial and body performance synthesis.
Facial performance synthesis is a complex, multistep graphics problem. First, the target performance to be modified must be tracked and captured accurately. Then, based on the desired modification (whether to change the identity, facial expressions, or both), a modified source performance must be synthesized or captured from a different actor. Finally, the original facial performance must be removed and replaced with the synthesized one. This multistep process poses many unique challenges. Conventional CG tracking and retargeting of expressions from the source to the target with a 3D mesh and static texture gives an undesired rubbery-skin effect. Furthermore, inaccuracies in the expression tracking of the source performance using a blendshape model will result in an uncanny-valley effect in the output performance. It is often necessary to use costly capture methods, such as a Light Stage, to obtain highly accurate 3D captures and dynamic textures of a source performance in order to avoid these pitfalls. Even then, final modified performances are often uncanny.
For human body motion synthesis, creating new motions often requires manual artist animation, tracking new motions on an actor, or stitching together subsequences of previous animations. These methods are limited by cost, or are not able to generate appreciably novel motions.
Over the last several years, advances in AI-based generation techniques have let us address many of these issues. In this thesis, we will go over several novel techniques that reduce the cost (time, money, ease of access) and improve the quality of facial reenactment and body motion synthesis pipelines. The applications of these techniques allow us to tackle new problem settings in an efficient way.
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://us05web.zoom.us/j/86385849747?pwd=V2lwR2FXekI5WVpNMGU0bWF5clJIQT09
-
Innovation For Defense Applications Showcase
Tue, May 09, 2023 @ 04:30 PM - 06:30 PM
Viterbi Technology Innovation and Entrepreneurship
You are invited to join us for the Innovation For Defense Applications team presentations showcase. This semester we have teams that have worked on various problem sets for their Department of Defense sponsors.
The event will be held on the USC campus in Ronald Tutor Hall (RTH), room 526. Doors open at 4:30 pm, and light refreshments will be served.
If you cannot attend in person, a Zoom link will also be provided as a virtual option.
RSVP
Location: Ronald Tutor Hall of Engineering (RTH) - 526
Audiences: Everyone Is Invited
Contact: Johannah Murray
Event Link: https://forms.gle/EZP7rh2y4uPHMcne7
-
Viterbi Startup Garage: Founding & Funding Deep Tech Companies
Wed, May 17, 2023 @ 12:00 PM - 01:00 PM
Viterbi Technology Innovation and Entrepreneurship
Who: Joe Wilson, Managing Partner at Undeterred Capital
What: Companies face challenges when seeking to commercialize technological breakthroughs. How do these companies attract funding, and how do they scale over time?
When: Wed, May 17, 2023 (12-1 PM PT)
Where: Zoom
(Register for Zoom Link)
Location: Zoom
Audiences: Everyone Is Invited
Contact: VSG
Event Link: https://vsg-events.my.canva.site/vsg-oceanside-chat-may-17-2023
-
PhD Dissertation Defense - Heramb Nemlekar
Thu, May 18, 2023 @ 11:30 AM - 01:30 PM
Thomas Lord Department of Computer Science
Committee: Gaurav Sukhatme, Heather Culbertson, Jyotirmoy Deshmukh, Satyandra K. Gupta, Stefanos Nikolaidis (Chair)
Title: Efficiently Learning Human Preferences for Proactive Robot Assistance in Assembly Tasks
Abstract:
Robots that support humans in collaborative tasks need to adapt efficiently to the individual preferences of their human partners. While prior work has mainly focused on learning human preferences from demonstrations in the actual task, obtaining this data can be expensive in real-world settings such as assembly and manufacturing. This dissertation therefore proposes leveraging prior knowledge of (i) similarities in the preferences of different users in a given task and (ii) similarities in the preferences of a given user across different tasks for efficient robot adaptation. First, to leverage similarities between users, we propose a two-stage approach for clustering user demonstrations to identify the dominant models of user preferences in complex assembly tasks. This allows assistive robots to efficiently infer the preferences of new users by matching their actions to a dominant preference model. We evaluate our approach in an IKEA assembly study and show that it can improve the accuracy of predicting user actions by quickly inferring the user's preference. Next, to leverage similarities between tasks, we propose learning user preferences as a function of task-agnostic features (e.g., the mental and physical effort of user actions) from demonstrations in a short canonical task and transferring the preferences to the actual assembly. Obtaining demonstrations in a canonical task requires less time and human effort, allowing robots to learn user preferences efficiently. In a user study with a manually designed canonical task and an actual task of assembling a model airplane, we observe that our approach can predict user actions in the actual assembly based on the task-agnostic preferences learned in the canonical task. We extend our approach to account for users who change their preferences when switching tasks by updating the transferred user preferences during the actual task. In a human-robot assembly study, we demonstrate how an assistive robot can adapt to the changing preferences of users and proactively support them, thereby reducing their idle time and enhancing their collaborative experience. Lastly, we propose a method to automatically select a canonical task suitable for transfer learning of human preferences based on the expressiveness of the task. Our experiments show that transferring user preferences from a short but expressive canonical task improves the accuracy of predicting user actions in longer actual tasks. Overall, this dissertation proposes and evaluates novel approaches for efficiently adapting to human preferences, which can enhance the productivity and satisfaction of human workers in real-world assemblies.
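A hypothetical sketch of the dominant-preference-model idea: cluster feature vectors summarizing past users' demonstrations, then match a new user's first few actions to the nearest cluster. Plain k-means and the feature encoding here are illustrative assumptions; the dissertation's two-stage clustering differs in detail.

```python
# Illustrative sketch: cluster demonstration summaries into dominant preference
# models, then assign a new user to the closest model from early observed actions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Each row summarizes one user's demonstration, e.g. frequencies of action types.
demo_features = rng.random((60, 8))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(demo_features)
dominant_models = kmeans.cluster_centers_        # the dominant preference models

def infer_preference(partial_actions: np.ndarray) -> int:
    """Assign a new user to the closest dominant model from their first few actions."""
    distances = np.linalg.norm(dominant_models - partial_actions, axis=1)
    return int(np.argmin(distances))

new_user = rng.random(8)                         # features from the first few actions
print("predicted dominant preference model:", infer_preference(new_user))
```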
Location: Henry Salvatori Computer Science Center (SAL) - 213
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/91591350584?pwd=a2lRcE9peGFCeFBLa05sRW1vT25UUT09