Select a calendar:
Filter May Events by Event Type:
Events for May 09, 2024
-
Aircraft Accident Investigation AAI 24-4
Thu, May 09, 2024 @ 08:00 AM - 04:00 PM
Aviation Safety and Security Program
University Calendar
The course is designed for individuals who have limited investigation experience. All aspects of the investigation process are addressed, from preparation for the investigation through writing the final report. It covers National Transportation Safety Board (NTSB) and International Civil Aviation Organization (ICAO) procedures. Investigative techniques are examined with an emphasis on fixed-wing investigation. Data collection, wreckage reconstruction, and cause analysis are discussed in the classroom and applied in the lab.
The USC Aircraft Accident Investigation lab serves as the location for practical exercises. Thirteen aircraft wreckages form the basis of these investigative exercises. The crash laboratory gives students an opportunity to learn the observation and documentation skills required of accident investigators. The wreckage is examined and reviewed with investigators who have extensive real-world investigation experience. Examination techniques and methods are demonstrated, along with participative group discussions of actual wreckage examination, reviews of witness interview information, and discussions of investigation group dynamics.
Location: WESTMINSTER AVENUE BUILDING (WAB) - Unit E
Audiences: Everyone Is Invited
Contact: Daniel Scalese
Event Link: https://avsafe.usc.edu/wconnect/CourseStatus.awp?&course=24AAAI4
-
NL Seminar-Event Extraction for Epidemic Prediction
Thu, May 09, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Tanmay Parekh, UCLA
Talk Title: Event Extraction for Epidemic Prediction
Series: NL Seminar
Note: Meeting hosts only admit online guests they know to the Zoom meeting, so you are highly encouraged to use your USC account to sign into Zoom. If you are an outside visitor, please inform us at nlg-seminar-host(at)isi.edu so we can admit you. Specify whether you will attend remotely or in person at least one business day before the event, provide your full name, job title, and professional affiliation, and arrive at least 10 minutes before the seminar begins. If you do not have access to the 6th floor for in-person attendance, please check in at the 10th-floor main reception desk to register as a visitor, and someone will escort you to the conference room.
Biography: Tanmay Parekh is a third-year PhD student in Computer Science at the University of California, Los Angeles (UCLA), advised by Prof. Nanyun Peng and Prof. Kai-Wei Chang. Previously, he completed his Masters at the Language Technologies Institute at Carnegie Mellon University (CMU), where he worked with Prof. Alan Black and Prof. Graham Neubig. He completed his undergraduate studies at the Indian Institute of Technology Bombay (IITB) and has also worked in industry at Amazon and Microsoft. His research spans multilingual NLP, code-switching, controlled generation, and speech technologies; his current focus is improving the utilization and generalizability of Large Language Models (LLMs) for Information Extraction (specifically Event Extraction) across languages and domains.
Abstract: Early warnings and effective control measures are among the most important tools for policymakers preparing against the threat of an epidemic. Social media is an important information source here, as it is more timely than alternatives such as news and public health reports, and it is publicly accessible. Given the sheer volume of daily social media posts, an automated system is needed to monitor social media and provide early, effective epidemic prediction. To this end, I introduce two works that aid the creation of such a system using information extraction. In the first work, we pioneer exploiting Event Detection (ED) for better preparedness and early warnings of any upcoming epidemic by developing a framework to extract and analyze epidemic-related events from social media posts. We curate an epidemic event ontology comprising seven disease-agnostic event types and construct a Twitter dataset, SPEED, focused on the COVID-19 pandemic. Experimentation reveals how ED models trained on COVID-based SPEED can effectively detect epidemic events for three unseen epidemics: Monkeypox, Zika, and Dengue. Furthermore, we show that reporting sharp increases in the events extracted by our framework can provide warnings 4-9 weeks earlier than the WHO epidemic declaration for Monkeypox. Since epidemics can originate anywhere in the world, social media posts discussing them can be in varied languages, and training supervised models on every language is a tedious and resource-expensive task. The alternative is zero-shot cross-lingual models. In the second work, we introduce a new approach for label projection that can generate synthetic training data in any language using the translate-train paradigm. This approach, CLaP, translates text to the target language and performs contextual translation on the labels using the translated text as the context, ensuring better accuracy for the translated labels. We leverage instruction-tuned language models with multilingual capabilities as our contextual translator, imposing the constraint that translated labels be present in the translated text via instructions. We benchmark CLaP against other label projection techniques on zero-shot cross-lingual transfer across 39 languages on two representative structured prediction tasks, event argument extraction (EAE) and named entity recognition (NER), showing over 2.4 F1 improvement for EAE and 1.4 F1 improvement for NER.
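The contextual-translation constraint behind CLaP can be pictured as a prompt-construction step: ask the translator for the full sentence translation, then for each labeled span's translation as it appears in that sentence. A minimal sketch follows; `build_clap_prompt` and the prompt wording are hypothetical illustrations, not the actual CLaP prompts, and the real system feeds such an instruction to a multilingual instruction-tuned LLM.

```python
def build_clap_prompt(text, labeled_spans, target_lang):
    """Build an instruction asking a multilingual LLM to translate `text`
    and, using the translated sentence as context, translate each labeled
    span so the label projects onto the target-language text.
    (Hypothetical prompt format, for illustration only.)"""
    span_lines = "\n".join(f"- {span} ({label})" for span, label in labeled_spans)
    return (
        f"Translate the following sentence into {target_lang}.\n"
        f"Then, for each listed span, give its translation exactly as it "
        f"appears in your translated sentence:\n"
        f"Sentence: {text}\n"
        f"Spans:\n{span_lines}"
    )

# Example: projecting an epidemic-event annotation into French.
prompt = build_clap_prompt(
    "The outbreak was declared in Lagos.",
    [("outbreak", "disease-event"), ("Lagos", "location")],
    "French",
)
```

Requiring the span translations to appear verbatim in the translated sentence is what makes the projection "contextual", as opposed to translating each label in isolation.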
Host: Jon May and Justin Cho
More Info: https://www.isi.edu/research-groups-nlg/nlg-seminars/
Location: Information Science Institute (ISI) - Conf Rm #689
WebCast Link: https://www.youtube.com/watch?v=8MPbW2abdKs
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/research-groups-nlg/nlg-seminars/
-
PhD Thesis Defense - Qinyi Luo
Thu, May 09, 2024 @ 11:00 AM - 02:00 PM
Thomas Lord Department of Computer Science
University Calendar
PhD Thesis Defense - Qinyi (Chelsea) Luo
Committee members: Xuehai Qian (co-chair), Viktor Prasanna (co-chair), Ramesh Govindan, Chao Wang, Feng Qian
Title: High-Performance Heterogeneity-Aware Distributed Machine Learning Model Training
Abstract: The increasing size of machine learning models and the ever-growing amount of data mean that training a model can take days or even weeks. To accelerate training, distributed training with parallel stochastic gradient descent is widely adopted as the go-to training method. This thesis targets four challenges in distributed training: (1) performance degradation caused by the large amounts of data transferred among parallel workers, (2) heterogeneous computation and communication capacities across training devices, i.e., the straggler issue, (3) huge memory consumption during training caused by gigantic model sizes, and (4) automatic selection of parallelization strategies. The thesis first delves into decentralized training and proposes system support and algorithmic innovations that strengthen tolerance against stragglers in data-parallel training. On the system side, a unique characteristic of decentralized training, the iteration gap, is identified, and a queue-based synchronization mechanism is proposed to efficiently support decentralized training as well as common straggler-mitigation techniques. In experiments, the proposed training protocol, Hop, provides strong tolerance against stragglers and trains much faster than standard decentralized training when stragglers are present. On the algorithm side, a novel communication primitive, randomized partial All-Reduce, is proposed to enable fast synchronization in decentralized data-parallel training. The proposed approach, Prague, achieves a 1.2x speedup over All-Reduce in a straggler-free environment and a 4.4x speedup when stragglers are present. Then, on the topic of memory optimization for training Deep Neural Networks (DNNs), an adaptive during-training model compression technique, FIITED, is proposed to reduce the memory consumption of training huge recommender models.
FIITED adapts to dynamic changes in data and adjusts the dimension of each individual embedding vector continuously during training. Experiments show that FIITED reduces training memory consumption significantly more than other embedding pruning methods while maintaining the trained model's quality. Finally, on automatic parallelization of training workloads, a novel unified representation of parallelization strategies, incorporating Data Parallelism (DP), Model Parallelism (MP), and Pipeline Parallelism (PP), is proposed, along with a search algorithm that selects superior parallel settings in the vast search space. An ideal stage partition ratio for synchronous pipelines is derived for the first time, to the best of my knowledge, and it is theoretically proven that unbalanced partitions are better than balanced ones. In addition, by examining the pipeline schedule, a trade-off between memory and performance is uncovered and explored. Experiments show that hybrid parallel strategies generated with these optimizations consistently outperform those without them.
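The partial All-Reduce idea can be illustrated with a toy sketch: instead of one global average over all workers, each step averages gradients only within randomly formed groups, so no worker waits on a straggler outside its group. This is a minimal single-process sketch of the grouping-and-averaging idea under assumed scalar gradients, not the Prague implementation.

```python
import random

def partial_all_reduce(grads, group_size, rng=None):
    """Toy randomized partial All-Reduce: shuffle workers into random
    groups of `group_size` and average gradients within each group only.
    `grads` holds one scalar gradient per worker (a simplification)."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    n = len(grads)
    order = list(range(n))
    rng.shuffle(order)  # random group assignment this step
    out = list(grads)
    for i in range(0, n, group_size):
        group = order[i:i + group_size]
        avg = sum(grads[w] for w in group) / len(group)
        for w in group:
            out[w] = avg  # each worker keeps its group's average
    return out

# 4 workers in groups of 2: the global mean is preserved across the
# system even though no global synchronization happened.
synced = partial_all_reduce([1.0, 3.0, 5.0, 7.0], group_size=2)
```

With `group_size` equal to the number of workers, this degenerates to a full All-Reduce; smaller groups trade per-step synchronization quality for straggler tolerance, since a slow worker only delays its own group.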
Date: May 9, 2024
Time: 11:00 a.m. - 1:00 p.m.
Location: EEB 110
Zoom link: https://usc.zoom.us/j/95741130954?pwd=dkRkblNlNGt0TlkwOU51SlRNS0hPZz09
Location: Hughes Aircraft Electrical Engineering Center (EEB)
Audiences: Everyone Is Invited
Contact: CS Events
Event Link: https://usc.zoom.us/j/95741130954?pwd=dkRkblNlNGt0TlkwOU51SlRNS0hPZz09