Events for April 25, 2024
-
Incident Investigation/Analysis IIA 24-2
Thu, Apr 25, 2024 @ 08:00 AM - 04:00 PM
Aviation Safety and Security Program
University Calendar
This course is designed for managers and supervisors who may be required to investigate, implement or review safety findings and recommendations resulting from aviation incidents. The course presents the principles of Management, Investigation and Analysis. It will explain how incidents are discovered, investigated, and reported in writing. The student will learn the techniques of data collection and analysis.
Location: Century Boulevard Building (CBB) - 920
Audiences: Everyone Is Invited
Contact: Daniel Scalese
Event Link: https://avsafe.usc.edu/wconnect/CourseStatus.awp?&course=24AIIA2
-
Human Factors in Aviation Maintenance
Thu, Apr 25, 2024 @ 08:00 AM - 04:00 PM
Aviation Safety and Security Program
University Calendar
This course is designed to provide knowledge and understanding of human factors in the realm of aviation safety with a focus on the role of the maintainer. It presents human factors issues as conditions/hazards that must be managed. Specific issues such as fatigue management, deviations from approved procedures, situation awareness, and the Dirty Dozen are presented. Data collection methodologies such as MEDA and LOSA are examined as viable safety information methods and hazard identification tools in an organization’s SMS. This course satisfies the Human Factors Course requirement for the USC Safety & Security Certificate.
Location: Century Boulevard Building (CBB) - 960
Audiences: Everyone Is Invited
Contact: Daniel Scalese
Event Link: https://avsafe.usc.edu/wconnect/CourseStatus.awp?&course=24AHFMX2
-
Career & Internship Bootcamp
Thu, Apr 25, 2024 @ 10:00 AM - 04:00 PM
Viterbi School of Engineering Career Connections
Receptions & Special Events
Location: Michelson Center for Convergent Bioscience (MCB) -
Audiences: All Viterbi BS, MS Students
Contact: RTH 218 Viterbi Career Connections
-
PhD Thesis Proposal - Navid Hashemi
Thu, Apr 25, 2024 @ 10:30 AM - 12:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Verification and Synthesis of Controllers for Temporal Logic Objectives Using Neuro-Symbolic Methods
Committee Members: Jyotirmoy Deshmukh (Chair), Gaurav Sukhatme, Chao Wang, Pierluigi Nuzzo, Lars Lindemann, Georgios Fainekos (External Member)
Date & Time: Thursday, April 25th, 10:30am - 12:00pm
Abstract: As the field of autonomy is embracing the use of neural networks for perception and control, Signal Temporal Logic (STL) has emerged as a popular formalism for specifying the task objectives and safety properties of such autonomous cyber-physical systems (ACPS). There are two important open problems in this research area: (1) how can we effectively train neural controllers in such ACPS applications, when the state dimensionality is high and when the task objectives are specified over long time horizons, and (2) how can we verify if the closed-loop system with a given neural controller satisfies given STL objectives. We review completed work in which we show how discrete-time STL (DT-STL) specifications lend themselves to a smooth neuro-symbolic encoding that enables the use of gradient-based methods for control design. We also show how a type of neuro-symbolic encoding of DT-STL specifications can be combined with neural network verification tools to provide deterministic guarantees. We also review how neural network encoding of the environment dynamics can help us combine statistical verification techniques with formal techniques for reachability analysis. We will then propose several directions that we will pursue in the future: (1) We will investigate if our neuro-symbolic encoding approach can extend to other temporal logics, especially those used for specifying properties of perception algorithms (such as Spatio-Temporal Perception Logic or STPL). Our idea is to use a neuro-symbolic encoding of STPL to improve the quality of outputs produced by perception algorithms. (2) We will investigate how control policies generated by our existing algorithms can be made robust to distribution shifts through online and offline techniques. (3) Finally, we will propose scaling our synthesis approaches to higher-dimensional observation spaces and longer horizon tasks. We conclude with the timeline to finish the proposed work and write the dissertation.
Location: Ronald Tutor Hall of Engineering (RTH) - 306
Audiences: Everyone Is Invited
Contact: Felante' Charlemagne
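The abstract above turns on one idea that a short sketch can make concrete: classical robustness of a discrete-time STL formula is built from min/max over time, which is non-differentiable, and a smooth surrogate (e.g., log-sum-exp) turns it into an objective that gradient-based methods can optimize. The Python snippet below is a generic illustration of that idea only, not the candidate's encoding; the example formula, temperature, and gradient-ascent loop are arbitrary choices for the demo.

```python
# Minimal sketch: smooth (differentiable) robustness for a toy DT-STL formula,
# so a gradient-based method can push a trajectory toward satisfying it.
import torch

def soft_max(values: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """Smooth stand-in for max(values); approaches the true max as temp grows."""
    return torch.logsumexp(temp * values, dim=0) / temp

def soft_min(values: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """Smooth stand-in for min(values)."""
    return -soft_max(-values, temp)

def smooth_robustness(x: torch.Tensor) -> torch.Tensor:
    """Soft robustness of the illustrative formula
       G_[0,T](x_t > 0)  AND  F_[0,T](x_t > 1)
    over a 1-D trajectory x of length T+1."""
    always_positive = soft_min(x - 0.0)       # "always x_t > 0"
    eventually_high = soft_max(x - 1.0)       # "eventually x_t > 1"
    return soft_min(torch.stack([always_positive, eventually_high]))

# Toy usage: gradient ascent on the soft robustness of a free trajectory.
traj = torch.zeros(20, requires_grad=True)
for _ in range(500):
    rob = smooth_robustness(traj)
    rob.backward()
    with torch.no_grad():
        traj += 0.1 * traj.grad
        traj.grad.zero_()
print(float(smooth_robustness(traj)))         # positive => (softly) satisfied
```

In actual controller synthesis the trajectory would instead be produced by rolling a neural policy through the system dynamics, so the same soft robustness gradient would flow back into the policy parameters.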
-
NL Seminar - How to Steal ChatGPT's Embedding Size, and Other Low-rank Logit Tricks
Thu, Apr 25, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Matt Finlayson, USC
Talk Title: How to Steal ChatGPT's Embedding Size, and Other Low-rank Logit Tricks
Series: NL Seminar
Abstract: The commercialization of large language models (LLMs) has led to the common practice of restricting access to proprietary models via a limited API. In this work we show that, with only a conservative assumption about the model architecture, it is possible to learn a surprisingly large amount of non-public information about an API-protected LLM from a relatively small number of API queries (e.g., costing under $1000 USD for OpenAI’s gpt-3.5-turbo). Our findings are centered on one key observation: most modern LLMs suffer from a softmax bottleneck, which restricts the model outputs to a linear subspace of the full output space. We exploit this fact to unlock several capabilities, including (but not limited to) obtaining cheap full-vocabulary outputs, auditing for specific types of model updates, identifying the source LLM given a single full LLM output, and even efficiently discovering the LLM’s hidden size. Our empirical investigations show the effectiveness of our methods, which allow us to estimate the embedding size of OpenAI’s gpt-3.5-turbo to be about 4096. Lastly, we discuss ways that LLM providers can guard against these attacks, as well as how these capabilities can be viewed as a feature (rather than a bug) by allowing for greater transparency and accountability.
*Meeting hosts only admit online guests that they know to the Zoom meeting. Hence, you’re highly encouraged to use your USC account to sign into Zoom. If you’re an outside visitor, please inform us at nlg-seminar-host(at)isi.edu to make us aware of your attendance so we can admit you. Specify whether you will attend remotely or in person at least one business day prior to the event, provide your full name, job title, and professional affiliation, and arrive at least 10 minutes before the seminar begins. If you do not have access to the 6th floor for in-person attendance, please check in at the 10th floor main reception desk to register as a visitor and someone will escort you to the conference room location.
Biography: Matthew Finlayson is a PhD student studying NLP at the University of Southern California. Previously he was a predoctoral researcher at the Allen Institute for AI (AI2) after completing his bachelors degree in computer science and linguistics at Harvard University. Matthew is interested in the practical consequences of the architectural design of language models, from security to generation, as well as understanding how language models learn and generalize from data.
Host: Jon May and Justin Cho
More Info: https://www.isi.edu/research-groups-nlg/nlg-seminars/
Location: Information Science Institute (ISI) - Conf Rm #689
WebCast Link: https://www.youtube.com/watch?v=3U9nA-l2YAs
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/research-groups-nlg/nlg-seminars/
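For readers who want a concrete handle on the talk's key observation, here is a small synthetic demonstration of the softmax-bottleneck argument: when logits are produced as a hidden state of size d multiplied by a d-by-V output matrix, any collection of full-vocabulary outputs has rank at most d, so stacking enough of them and checking the numerical rank recovers the hidden size. The snippet simulates the model rather than querying a real API, and the dimensions are toy values, not OpenAI's.

```python
# Synthetic illustration of the softmax bottleneck: logits live in a
# d-dimensional subspace of the V-dimensional output space, so the rank of
# a stack of outputs reveals the (hypothetical) embedding size d.
import numpy as np

rng = np.random.default_rng(0)
d, V, n_queries = 64, 4096, 100        # toy hidden size, vocab size, #outputs

W_out = rng.standard_normal((d, V))    # stand-in for the output embedding matrix
hidden = rng.standard_normal((n_queries, d))   # stand-in hidden states, one per query
logits = hidden @ W_out                # shape (n_queries, V); rank is at most d

estimated_d = np.linalg.matrix_rank(logits)
print(estimated_d)                     # 64: the "stolen" embedding size
```

Against a real API the same rank check would be run on reconstructed full-vocabulary outputs; per the abstract, this kind of analysis yields an estimate of about 4096 for gpt-3.5-turbo.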
-
PhD Dissertation Defense - Haidong Zhu
Thu, Apr 25, 2024 @ 12:00 PM - 02:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Shape-Assisted Multimodal Person Re-Identification
Committee Members: Ram Nevatia (Chair), Ulrich Neumann, Antonio Ortega
Date & Time: Thursday, April 25th, 12:00pm - 2:00pm
Abstract: Recognizing an individual's identity across non-overlapping images or videos, known as person re-identification, is a fundamental yet challenging task for biometric analysis. This task involves extracting and distinguishing unique features such as appearance, gait, and body shape to accurately identify individuals. Different from other representations, 3-D shape complements the body information with an external human body shape prior and enhances the appearance captured in the 2-D images. Although 3-D body shape offers invaluable external shape-related information that 2-D images lack, existing body shape representations often fall short in accuracy or demand extensive image data, which is unavailable for re-identification tasks. We explore various biometric representations for comprehensive whole-body person re-identification, with a particular emphasis on leveraging 3-D body shape. We focus on enhancing the detail and few-shot learning capabilities of 3-D shape representations through the application of implicit functions and generalizable Neural Radiance Fields (NeRF). Moreover, we propose the use of 3-D body shape for alignment and supervision during training, aiming to advance the accuracy and efficiency of person re-identification techniques.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 110
Audiences: Everyone Is Invited
Contact: Haidong Zhu
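As a rough illustration of the matching step that multimodal re-identification systems like the one described above rely on (this is a generic baseline, not the candidate's model), the sketch below fuses appearance and body-shape embeddings and ranks gallery identities by cosine similarity; the random vectors stand in for real encoder outputs, and the fusion weight is an arbitrary choice.

```python
# Minimal sketch of multimodal re-ID matching: fuse per-person appearance and
# shape embeddings, then rank gallery entries by cosine similarity to a query.
import numpy as np

rng = np.random.default_rng(1)
n_gallery, d_app, d_shape = 100, 512, 128

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def fuse(appearance: np.ndarray, shape: np.ndarray, w_shape: float = 0.5) -> np.ndarray:
    """Concatenate normalized appearance and shape features, down-weighting the
    shape branch; a simple fusion baseline used purely for illustration."""
    return np.concatenate([l2_normalize(appearance),
                           w_shape * l2_normalize(shape)], axis=-1)

gallery = fuse(rng.standard_normal((n_gallery, d_app)),
               rng.standard_normal((n_gallery, d_shape)))
query = gallery[42] + 0.1 * rng.standard_normal(gallery.shape[1])  # noisy view of ID 42

scores = l2_normalize(gallery) @ l2_normalize(query)   # cosine similarities
print(int(np.argmax(scores)))                          # expect 42 (rank-1 match)
```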
-
School of Advanced Computing: Computational Science Distinguished Seminar Series - Towards large scale quantum accuracy materials simulations
Thu, Apr 25, 2024 @ 02:00 PM - 03:00 PM
USC School of Advanced Computing, USC Viterbi School of Engineering
Conferences, Lectures, & Seminars
Speaker: Vikram Gavini, University of Michigan
Talk Title: Towards large scale quantum accuracy materials simulations
Series: Computational Science Distinguished Seminar Series
Abstract: Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. Despite the wide adoption of DFT, and the tremendous progress in theory and numerical methods over the decades, the following challenges remain. Firstly, many widely used implementations of DFT suffer from domain-size and geometry restrictions, limiting the complexity of materials systems that can be treated using DFT calculations. Secondly, there are many materials systems (such as strongly-correlated systems) where the widely used model exchange-correlation functionals in DFT, which account for the many-body quantum mechanical interactions between electrons, are not sufficiently accurate. This talk will discuss recent advances towards addressing the aforementioned challenges, which provide a path for large-scale quantum accuracy materials simulations. In particular, the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization will be presented, which form the basis for the recently released DFT-FE open-source code. The computational efficiency, scalability, and performance of DFT-FE, which can compute the electronic structure of systems containing many thousands of atoms in wall-times of a few minutes, will be presented. Some recent studies on the energetics of quasicrystals (ScZn7.33) and dislocations in Mg using DFT-FE will be presented, which highlight the complex systems that can be tackled using DFT-FE. In addressing the second challenge, our recent progress in bridging highly accurate quantum many-body methods with DFT will be discussed, which is achieved by computing and using exact exchange-correlation potentials to improve the exchange-correlation functional description in DFT.
Biography: Vikram Gavini is Professor of Mechanical Engineering and Materials Science & Engineering at the University of Michigan. He received his Ph.D. from California Institute of Technology in 2007. His interests are in developing methods for large-scale and quantum-accurate electronic structure calculations, numerical analysis of PDEs and scientific computing. DFT-FE, a massively parallel open-source code for large-scale real-space DFT calculations, has been developed in his group. He is the recipient of NSF CAREER Award in 2011, AFOSR Young Investigator Award in 2013, Humboldt Research Fellowship for Experienced Researchers (2012-14), USACM Gallagher Award in 2015, among others. He led the team that received the 2023 ACM Gordon Bell Prize in high performance computing.
More Info: https://sac.usc.edu/events/
Location: Ronald Tutor Hall of Engineering (RTH) - 526
Audiences: Everyone Is Invited
Contact: Raymond USC Viterbi
Event Link: https://sac.usc.edu/events/
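For context on the kind of computation codes like DFT-FE iterate, the deliberately toy self-consistent field (SCF) loop below shows the basic structure: build an effective Hamiltonian from the current electron density, diagonalize it, rebuild the density from the lowest occupied orbitals, and mix until the density stops changing. Everything here (the 1-D finite-difference grid, the harmonic external potential, the crude mean-field term standing in for Hartree and exchange-correlation) is a simplification for illustration and bears no resemblance to the finite-element, quantum-accurate machinery discussed in the talk.

```python
# Toy SCF loop on a 1-D grid; illustrates the fixed-point iteration shared by
# DFT codes, not any particular method from the seminar.
import numpy as np

n_grid, n_electrons, mix, L = 200, 4, 0.3, 10.0
x = np.linspace(0, L, n_grid)
h = x[1] - x[0]

# Kinetic energy via a second-order finite-difference Laplacian.
lap = (np.diag(-2.0 * np.ones(n_grid))
       + np.diag(np.ones(n_grid - 1), 1)
       + np.diag(np.ones(n_grid - 1), -1)) / h**2
T = -0.5 * lap
v_ext = 0.5 * (x - L / 2) ** 2                  # external (harmonic) potential

density = np.full(n_grid, n_electrons / L)      # uniform starting guess
for step in range(200):
    v_eff = v_ext + density                     # crude mean-field stand-in for Hartree/XC
    _, orbitals = np.linalg.eigh(T + np.diag(v_eff))
    occ = orbitals[:, : n_electrons // 2]       # doubly occupy the lowest orbitals
    new_density = 2.0 * np.sum(occ ** 2, axis=1) / h
    if np.max(np.abs(new_density - density)) < 1e-5:
        break                                   # density is self-consistent
    density = (1.0 - mix) * density + mix * new_density   # linear density mixing
print(step, density.sum() * h)                  # iterations used, total electron count
```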
-
PhD Dissertation Defense - Zhaoheng Zheng
Thu, Apr 25, 2024 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Incorporating Large-Scale Vision-Language Corpora in Visual Understanding
Committee Members: Ram Nevatia (Chair), Mohammad Soleymani, Keith Jenkins
Date and Time: Thursday, April 25th, 2:00pm - 4:00pm
Abstract: As key mediators of human perception, vision and language corpora play critical roles in the development of modern Artificial Intelligence (AI). The size of vision-language corpora has scaled up rapidly in recent years, from thousands to billions, enabling the creation of large foundation models. However, a series of problems in this emerging area remains to be explored.
We start with a study of compositional learning from pre-VLM times to the post-VLM era. We introduce a representation blending approach that creates robust features for compositional image classification and a two-stream architecture that tackles the entanglement in the feature space of the object-attribute detection problem with novel object-attribute pairs. We further design an adaptation approach to leverage CLIP encoders for compositional image classification.
The second part covers a variety of methods built with multimodal transformer models. For image retrieval, we propose a framework that assembles multimodal inputs into sequences with which a multimodal transformer encoder can be fine-tuned. The pre-training of vision-language models (VLMs) is also explored. Specifically, we introduce a fractional intermediate tower that improves the feature expressibility of dual-tower vision-language models. We further design a unified pipeline that allows a VLM to learn from not only vision-language corpora but unimodal visual and linguistic data.
Lastly, we study how to leverage the knowledge of Large Language Models (LLMs) for low-shot image classification, in a data- and computation-efficient way.
Zoom Link: https://usc.zoom.us/j/96814169370?pwd=NkhSYWFKNCsya0lyaUFBVlVDQkI3Zz09
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 110
Audiences: Everyone Is Invited
Contact: Zhaoheng Zheng
Event Link: https://usc.zoom.us/j/96814169370?pwd=NkhSYWFKNCsya0lyaUFBVlVDQkI3Zz09
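As a concrete (and heavily simplified) picture of the CLIP-style compositional classification mentioned in the abstract: each attribute-object pair is rendered as a text prompt, prompts and the image are embedded into a shared space, and the highest cosine similarity picks the predicted pair. The encoders below are placeholders seeded from the prompt text so the demo is self-contained; they are not CLIP, and this is not the thesis method.

```python
# Sketch of compositional (attribute-object) classification by prompt scoring,
# with placeholder encoders standing in for CLIP's text and image towers.
import numpy as np

rng = np.random.default_rng(2)
EMBED_DIM = 512
attributes = ["wet", "dry", "rusty"]
objects = ["dog", "car", "bicycle"]
pairs = [(a, o) for a in attributes for o in objects]

def encode_text(prompt: str) -> np.ndarray:
    """Placeholder text encoder: a pseudo-embedding derived from the prompt."""
    local = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
    return local.standard_normal(EMBED_DIM)

def encode_image(image_id: int) -> np.ndarray:
    """Placeholder image encoder: fakes an image of a 'wet dog' by reusing its
    prompt embedding plus noise, so the demo has a known right answer."""
    return encode_text("a photo of a wet dog") + 0.3 * rng.standard_normal(EMBED_DIM)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

text_bank = np.stack([normalize(encode_text(f"a photo of a {a} {o}")) for a, o in pairs])
image_vec = normalize(encode_image(0))
scores = text_bank @ image_vec            # cosine similarities, one per pair
print(pairs[int(np.argmax(scores))])      # expect ('wet', 'dog')
```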