
Events Calendar

University Calendar
Events for March

  • PhD Thesis Proposal - Aaron Chan

    Fri, Mar 11, 2022 @ 03:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    PhD Candidate: Aaron Chan

    Title: Generating and Utilizing Machine Explanations for Trustworthy NLP

    Time: Friday, March 11, 3:00PM-5:00PM PST

    Committee: Xiang Ren, Robin Jia, Jesse Thomason, Bistra Dilkina, Morteza Dehghani

    Abstract:
    Neural language models (NLMs) have achieved remarkable success on a wide range of natural language processing (NLP) tasks. However, NLMs sometimes exhibit undesirable behavior, which can be difficult to resolve because their reasoning processes are opaque. This lack of transparency raises serious concerns about NLMs' trustworthiness in high-stakes decision-making.

    This motivates the use of machine explanations to automatically interpret how NLMs make decisions. In my thesis proposal, I argue that building human trust in NLP systems requires being able to: (A) generate machine explanations for NLM behavior faithfully and plausibly, and (B) utilize machine explanations to improve language model decision-making.

    First, I introduce a framework for optimizing machine explanations w.r.t. both faithfulness and plausibility, without compromising the NLM's task performance. Second, I present an algorithm for regularizing NLMs via machine explanations, in order to improve NLM task performance. Third, I discuss using limited human-in-the-loop feedback on machine explanations to further improve NLMs' generalization ability.
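
    The proposal does not spell out a training objective, but the framework described above is naturally expressed as a joint loss over task performance and explanation quality. The PyTorch sketch below is a minimal, hypothetical instantiation: the names (explanation_regularized_loss, attribution_scores, rationale_mask, lambda_expl) are illustrative, and the plausibility term shown (matching token attributions to human rationales) is one common choice, not necessarily the proposal's; a faithfulness term would be added analogously.

        import torch
        import torch.nn.functional as F

        def explanation_regularized_loss(logits, labels, attribution_scores,
                                         rationale_mask, lambda_expl=0.1):
            # Task term: standard cross-entropy on the NLM's predictions.
            task_loss = F.cross_entropy(logits, labels)
            # Plausibility term: align per-token attribution scores with a
            # human rationale mask (1 = token is part of the rationale).
            plaus_loss = F.binary_cross_entropy_with_logits(
                attribution_scores, rationale_mask)
            return task_loss + lambda_expl * plaus_loss

        # Toy usage with random tensors standing in for model outputs.
        logits = torch.randn(8, 2, requires_grad=True)   # 8 examples, 2 classes
        labels = torch.randint(0, 2, (8,))
        attribution_scores = torch.randn(8, 16, requires_grad=True)  # 16 tokens
        rationale_mask = torch.randint(0, 2, (8, 16)).float()
        explanation_regularized_loss(logits, labels, attribution_scores,
                                     rationale_mask).backward()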

    Zoom Link: https://usc.zoom.us/j/99570395469?pwd=OE9IMnhLOU5oSmRCYzFiUWdMZ1BuZz09

    WebCast Link: https://usc.zoom.us/j/99570395469?pwd=OE9IMnhLOU5oSmRCYzFiUWdMZ1BuZz09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Chaoyang He

    Fri, Mar 25, 2022 @ 11:00 AM - 12:30 PM

    Thomas Lord Department of Computer Science

    Time: 11:00 AM - 12:30 PM, March 25, 2022

    Committee Members: Salman Avestimehr (Chair), Mahdi Soltanolkotabi, Murali Annavaram, Ram Nevatia, Xiang Ren

    Zoom Link: https://usc.zoom.us/my/usc.chaoyanghe

    Title: Federated and Distributed Machine Learning at Scale: From Systems to Algorithms to Applications

    Abstract:
    Federated learning (FL) is a machine learning paradigm in which many clients (e.g., mobile/IoT devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider) while keeping the training data decentralized. It has shown huge potential for mitigating many of the systemic privacy risks, regulatory restrictions, and communication costs of traditional, centralized machine learning and data science approaches in healthcare, finance, smart cities, autonomous driving, and the Internet of Things. Though promising, landing FL in trustworthy data-centric AI infrastructure faces many practical challenges, from learning algorithms (e.g., data heterogeneity, label deficiency) to distributed systems (resource constraints, system heterogeneity, security, privacy, etc.), requiring interdisciplinary research across machine learning, distributed systems, and security/privacy. Driven by this goal, this thesis focuses on scaling federated and distributed machine learning end-to-end, from algorithms to systems to applications.
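
    The setup described above maps onto the canonical FedAvg loop: each client runs a few steps of local training on data that never leaves the device, and the server averages the returned weights. Below is a minimal, illustrative NumPy sketch with toy linear-regression clients; the function names are hypothetical and this is not FedML's actual API.

        import numpy as np

        def local_train(w, X, y, lr=0.1, epochs=5):
            # One client's local update: full-batch gradient descent on its
            # private least-squares objective (X, y stay on the device).
            for _ in range(epochs):
                grad = 2 * X.T @ (X @ w - y) / len(y)
                w = w - lr * grad
            return w

        def fedavg_round(w_global, client_data):
            # Clients train locally; only model weights travel to the server.
            local_models = [local_train(w_global.copy(), X, y)
                            for X, y in client_data]
            # Server aggregates by averaging, weighted by client sample counts.
            sizes = [len(y) for _, y in client_data]
            return np.average(local_models, axis=0, weights=sizes)

        # Toy usage: three clients holding private linear-regression data.
        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0])
        clients = []
        for _ in range(3):
            X = rng.normal(size=(50, 2))
            clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))
        w = np.zeros(2)
        for _ in range(20):  # communication rounds
            w = fedavg_round(w, clients)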

    In the first part, we focus on the design of distributed systems for federated and distributed machine learning. We propose FedML, a widely adopted open-source library for federated learning, and PipeTransformer, which leverages automated elastic pipelining for efficient distributed training of Transformer models. FedML supports three computing paradigms: on-device training using a federation of edge devices, distributed training in the cloud that supports the exchange of auxiliary information beyond gradients, and single-machine simulation of federated learning algorithms. FedML also promotes diverse algorithmic research with a flexible and generic API design and comprehensive reference baseline implementations (optimizers, models, and datasets). In PipeTransformer, we design an adaptive on-the-fly freeze algorithm that gradually identifies and freezes layers during training, and an elastic pipelining system that dynamically reallocates resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers onto fewer GPUs, and forks more replicas to increase data-parallel width.
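
    The freeze-then-repack idea can be sketched in a few lines of PyTorch: once a prefix of layers is frozen, those layers need no gradients, so the pipeline can drop them and pack the active layers onto fewer GPUs. The fixed per-epoch schedule below is a stand-in for PipeTransformer's adaptive freeze decision, and no actual pipeline parallelism is shown.

        import torch
        import torch.nn as nn

        def freeze_prefix(layers, num_frozen):
            # Exclude the first `num_frozen` layers from gradient computation;
            # in PipeTransformer these layers would also leave the pipeline.
            for i, layer in enumerate(layers):
                for p in layer.parameters():
                    p.requires_grad = i >= num_frozen

        layers = nn.ModuleList([nn.TransformerEncoderLayer(d_model=64, nhead=4)
                                for _ in range(6)])
        for epoch in range(6):
            # Hypothetical fixed schedule; the real algorithm chooses the
            # freeze boundary adaptively from training signals.
            freeze_prefix(layers, num_frozen=epoch)
            active = [p for p in layers.parameters() if p.requires_grad]
            optimizer = torch.optim.Adam(active, lr=1e-4)
            # ... forward/backward over the epoch's batches, optimizer.step() ...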

    In the second part, we propose a series of algorithms that scale up federated learning by breaking many of the aforementioned constraints: FedGKT, edge-cloud collaborative training for resource-constrained clients; FedNAS, a step toward automated model design on invisible data via neural architecture search; SpreadGNN, effective training over decentralized topologies; SSFL, which tackles label deficiency via personalized self-supervision; and LightSecAgg, a lightweight and versatile secure aggregation protocol. Most of these algorithms are compatible with one another, and we unify all implementations under the FedML framework. Under the complex constraints of the real world, orchestrating these algorithms therefore has the potential to greatly enhance the scalability of federated learning.
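
    As one concrete illustration of what secure aggregation guarantees, the classic pairwise-masking construction (as in Bonawitz et al.'s SecAgg) lets the server recover the sum of client updates without seeing any individual update; LightSecAgg achieves the same goal with a lighter one-shot mask-reconstruction scheme that is not reproduced here. All names in the sketch below are illustrative.

        import numpy as np

        def pairwise_masks(num_clients, dim, seed=0):
            # Each unordered pair (i, j) agrees on a shared random mask.
            rng = np.random.default_rng(seed)
            return {(i, j): rng.normal(size=dim)
                    for i in range(num_clients)
                    for j in range(i + 1, num_clients)}

        def masked_update(i, update, masks, num_clients):
            # Client i adds masks for pairs where it holds the smaller index
            # and subtracts the rest, so every mask cancels in the sum.
            out = update.copy()
            for j in range(num_clients):
                if i < j:
                    out += masks[(i, j)]
                elif j < i:
                    out -= masks[(j, i)]
            return out

        # The server sees only masked updates, yet their sum is the true sum.
        updates = [np.ones(4) * k for k in range(3)]
        masks = pairwise_masks(3, 4)
        masked = [masked_update(i, u, masks, 3) for i, u in enumerate(updates)]
        assert np.allclose(sum(masked), sum(updates))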

    Finally, we propose the FedML Ecosystem, a family of open research libraries that facilitate federated learning research in diverse application domains: FedNLP (Natural Language Processing), FedCV (Computer Vision), FedGraphNN (Graph Neural Networks), and FedIoT (Internet of Things). Compared with TFF and LEAF, FedNLP and FedCV greatly enrich the diversity of datasets and learning tasks. FedNLP supports popular task formulations in the NLP domain, such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling. FedCV helps researchers evaluate the three most representative vision tasks: image classification, image segmentation, and object detection. Moreover, FedGraphNN is the first FL research platform for analyzing graph-structured data with graph neural networks in a distributed computing manner, filling the gap between federated learning and the data mining field. Going beyond traditional AI applications, FedIoT further extends FL to wireless communication (e.g., 5G) and mobile computing (e.g., embedded IoT devices such as the Raspberry Pi and smartphones running Android).

    WebCast Link: https://usc.zoom.us/my/usc.chaoyanghe

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon
