University of Southern California

Events Calendar


Events for August 23, 2023

  • Computer Science General Faculty Meeting

    Wed, Aug 23, 2023 @ 11:30 AM - 01:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events

Biweekly faculty meeting for invited full-time Computer Science faculty only. Event details emailed directly to attendees.

    Location: Ronald Tutor Hall of Engineering (RTH) - 526

    Audiences: Invited Faculty Only

    Contact: Assistant to CS Chair

  • AME Seminar

    Wed, Aug 23, 2023 @ 03:30 PM - 04:30 PM

    Aerospace and Mechanical Engineering

    Conferences, Lectures, & Seminars

Speaker: Pradeep Sharma, Chair of the Department of Mechanical Engineering, University of Houston

    Talk Title: Flexoelectricity and Electrets

Abstract: The ability of certain materials to convert electrical stimuli into mechanical deformation, and vice versa, is a prized property. Not surprisingly, applications of such so-called piezoelectric materials are broad, ranging from energy harvesting to self-powered sensors. In this presentation, I will highlight a relatively understudied electromechanical coupling called flexoelectricity that appears to have implications in topics ranging from biophysics to the design of next-generation soft multifunctional materials. Specifically, I will argue, through computational examples, for the tantalizing possibility of creating apparently piezoelectric materials without piezoelectric materials (e.g., graphene), the emergence of giant piezoelectricity at the nanoscale, and (among others) the mechanisms underpinning magnetoreception in certain animals.
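For orientation, the contrast the abstract draws between the two couplings can be written in textbook constitutive form (a standard sketch, not taken from the talk itself): piezoelectric polarization couples to strain, while flexoelectric polarization couples to the strain gradient, which is why even non-piezoelectric materials can polarize under inhomogeneous strain.

% Piezoelectricity: polarization proportional to strain
P_i = d_{ijk}\,\varepsilon_{jk}
% Flexoelectricity: polarization proportional to the strain gradient
P_i = \mu_{ijkl}\,\frac{\partial \varepsilon_{jk}}{\partial x_l}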

Biography: Pradeep Sharma is the Hugh Roy and Lillie Cranz Cullen Distinguished University Professor and Chair of Mechanical Engineering at the University of Houston. He also has a joint appointment in the Department of Physics. He received his Ph.D. in mechanical engineering from the University of Maryland at College Park in 2000. Following his doctoral degree, he was employed at General Electric R&D for more than three years as a research scientist. He joined the Department of Mechanical Engineering at the University of Houston in January 2004. He is a member of the US National Academy of Engineering. His other honors and awards include the Young Investigators Award from the Office of Naval Research, the Thomas J.R. Hughes Young Investigator Award from the ASME, the Texas Space Grants Consortium New Investigators Program Award, the Fulbright fellowship, the Melville medal, the James R. Rice medal from the Society of Engineering Science, the ASME Charles R. Russ medal, the Guggenheim fellowship, and the University of Houston Research Excellence Award. He is a fellow of the ASME, an associate editor of the Journal of the Mechanics and Physics of Solids, editor-in-chief of the Journal of Applied Mechanics, and serves on the editorial boards of several other journals. He specializes in the broadly defined fields of continuum mechanics of solids and theoretical and computational materials science.

    Host: AME Department

    More Info: https://ame.usc.edu/seminars/

    Location: Seaver Science Library (SSL) - 202

    Audiences: Everyone Is Invited

    Contact: Tessa Yao

    Event Link: https://ame.usc.edu/seminars/

  • PhD Thesis Defense - Sarik Ghazarian

    Wed, Aug 23, 2023 @ 04:00 PM - 06:00 PM

    Thomas Lord Department of Computer Science

    University Calendar

    PhD Thesis Defense - Sarik Ghazarian

Committee Members: Aram Galstyan, Nanyun Peng, Kallirroi Georgila, Gaurav Sukhatme, Morteza Dehghani

    Title: Automatic Evaluation of Open Domain Dialogue Systems

Abstract: With the rapid development of open-domain dialogue systems in recent years, it is imperative to have precise evaluation metrics that correctly assess the quality of these systems. To this end, many researchers resort primarily to human evaluation, which is time-consuming, expensive, and does not facilitate model comparisons across research papers. Accurate automatic evaluation metrics are therefore needed to accelerate the development cycle by assisting architecture search and hyperparameter tuning. Reference-based metrics such as BLEU or ROUGE fail to correlate well with human judgment in open-domain settings, as there can be many plausible generations that do not overlap significantly with the limited set of given references. This failure has led research toward learned evaluation metrics that are more sophisticated and reliable.
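The weakness of reference-overlap metrics in open-domain dialogue can be illustrated with a toy unigram-overlap score, a crude stand-in for BLEU/ROUGE (the utterances below are invented for illustration, not taken from the dissertation):

```python
# Toy unigram-overlap score: a simplified stand-in for BLEU/ROUGE-style
# reference-overlap evaluation. All example utterances are hypothetical.

def unigram_overlap(reference: str, candidate: str) -> float:
    """Fraction of candidate tokens that also appear in the reference."""
    ref_tokens = set(reference.lower().split())
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    return sum(t in ref_tokens for t in cand_tokens) / len(cand_tokens)

context = "What did you do over the weekend?"
reference = "I went hiking with my dog in the mountains."

# A perfectly plausible reply that shares no words with the reference...
plausible = "Mostly relaxed at home and caught up on some reading."
# ...scores far below a degenerate reply that merely parrots reference words.
parroting = "I went went went in the the mountains."

print(unigram_overlap(reference, plausible))  # low, despite being a good reply
print(unigram_overlap(reference, parroting))  # high, despite being nonsense
```

The good reply scores zero while the nonsensical one scores perfectly, which is exactly the failure mode that motivates learned evaluation metrics.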
Automatic evaluation of open-domain dialogue systems is multifaceted, with many fine-grained quality aspects. This dissertation explores both turn-level and conversation-level facets of open-domain dialogue evaluation. We train models that automatically assess the relevance, engagement, coherence, and commonsense aspects of the responses generated by dialogue models. We formulate evaluation as a classification task that identifies the quality of the responses. To this end, we focus on the training data and model architecture of these metrics, the two main components on which metric quality strongly relies. We start with heuristic text-level manipulations, such as random swapping of utterances, to create negative samples for training evaluation metrics. We then show that such manipulations are insufficient to reflect the issues that arise in interactions between advanced dialogue models and humans. To address this, we propose advanced semantic-level perturbations of human-written responses to generate challenging negative responses that are more likely to be produced by state-of-the-art dialogue models. Next, we complete our investigation of dialogue evaluation by concentrating on the model architecture of these metrics, incorporating knowledge from knowledge bases and leveraging prompt-based generative models in a low-resource setting. Finally, beyond dialogue assessment, the main goal of automatic evaluation metrics, we leverage them as control factors to guide dialogue models toward generating higher-quality responses.
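The heuristic negative-sampling step the abstract mentions (random swapping of utterances in a coherent dialogue) might be sketched as follows; the dialogue, function name, and seeding are illustrative assumptions, not the dissertation's actual code:

```python
import random

def swap_negative_sample(dialogue: list[str], rng: random.Random) -> list[str]:
    """Create a (likely incoherent) negative training sample by swapping
    two randomly chosen utterances in an otherwise coherent dialogue."""
    assert len(dialogue) >= 2, "need at least two utterances to swap"
    i, j = rng.sample(range(len(dialogue)), 2)  # two distinct positions
    perturbed = dialogue.copy()
    perturbed[i], perturbed[j] = perturbed[j], perturbed[i]
    return perturbed

# Hypothetical coherent dialogue (positive sample).
dialogue = [
    "Hi, how are you?",
    "Great, I just got back from a trip.",
    "Oh nice, where did you go?",
    "Paris. The food was amazing.",
]

rng = random.Random(0)  # seeded for reproducibility
negative = swap_negative_sample(dialogue, rng)
print(negative)  # same utterances, shuffled flow -> incoherent negative sample
```

A coherence metric is then trained to score the original ordering above the perturbed one; as the abstract notes, such surface-level swaps are ultimately too easy to catch compared with the subtler errors advanced dialogue models actually make.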

    Audiences: Everyone Is Invited

    Contact: Melissa Ochoa

    Event Link: https://usc.zoom.us/j/97105095544?pwd=Q05tWTdLSFdhNS9EY2JRMklWbHRkUT09