Events for October 26, 2023
-
Maseeh Entrepreneurship Prize Competition
Thu, Oct 26, 2023
Viterbi Technology Innovation and Entrepreneurship
Student Activity
MEPC is accepting applications!
The 2023-2024 Maseeh Entrepreneurship Prize Competition (MEPC) season has officially begun.
We are currently accepting applications until Monday, November 6, 2023 @ 11:59 PM.
All USC students are invited to participate, although each team must have at least one current Viterbi student (undergrad or grad). The maximum number of team members is 5.
Each team will participate in educational sessions, workshops, and meetings with mentors throughout the academic year. This year's MEPC programming will begin with a Kickoff on December 7, 2023.
Audiences: Everyone Is Invited
Contact: Viterbi TIE
Event Link: https://viterbiinnovation.usc.edu/competitions-and-programs/mepc/
-
Design Criteria for Human-Centered Natural Language Generation
Thu, Oct 26, 2023 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Kaitlyn Zhou, Stanford University
Talk Title: Design Criteria for Human-Centered Natural Language Generation
Series: NL Seminar
Abstract: REMINDER: Meeting hosts only admit guests that they know to the Zoom meeting, so you are highly encouraged to use your USC account to sign into Zoom. If you are an outside visitor, please inform us at nlg DASH seminar DASH host AT isi DOT edu beforehand so we will be aware of your attendance and can let you in. In-person attendance will be permitted for USC/ISI faculty, staff, and students only; the talk is open to the public virtually via the Zoom link.
Large language models have made substantial steps toward generating human-like language. However, this endeavor to mimic human language comes with potential drawbacks. By mimicking and appropriating human language, these systems produce language that inherits the harms and cognitive biases of humans while failing to ensure features like clarity and transparency. My research asks: how can generated language avoid the harms of natural language while supporting safe and collaborative human-AI interaction? Starting with the researchers, I study the quality criteria of natural language generation, using mixed-methods approaches to reveal design decisions made consciously and subconsciously by natural language generation practitioners. Looking through datasets of natural language, I identify the origins of language appropriation and illustrate the safety risks of mimicry via the linguistic miscalibration of language models. Lastly, I study how humans perceive the appropriation of social behaviors such as politeness and refusal, and the risks these may pose in chat settings. Throughout my research, I find that language models inappropriately appropriate the style, the use of linguistic cues, and the prosocial language of the human text they are trained on. My future work seeks to develop design criteria for generated language, centered on user needs, and to build training methods to achieve this goal.
Biography: Kaitlyn Zhou is currently pursuing her PhD in computer science at Stanford University, advised by Dan Jurafsky. Her research focuses on investigating the unintended consequences that stem from the appropriation of natural language by language models. Her work delves into various aspects, including the fairness implications associated with the evaluation of natural language generation, the linguistic miscalibration displayed by language models, and the misplaced overconfidence of publicly deployed chatbots. Kaitlyn has previously spent summers at Microsoft Research and the Allen Institute for Artificial Intelligence. She is funded by the Stanford Graduate Fellowship and her visualization techniques have gained recognition in prominent publications like The New York Times and the Wall Street Journal. In 2018, Kaitlyn was appointed by Washington State Governor Jay Inslee to the University of Washington Board of Regents.
Host: Jon May and Justin Cho
More Info: https://nlg.isi.edu/nl-seminar/
Location: Information Sciences Institute (ISI) - Virtual and ISI-Conf Rm #689
WebCast Link: https://youtu.be/bJC6PFxU99s
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://nlg.isi.edu/nl-seminar/
-
PhD Thesis Proposal - Jared Coleman
Thu, Oct 26, 2023 @ 12:00 PM - 01:00 PM
Thomas Lord Department of Computer Science
University Calendar
PhD Thesis Proposal - Jared Coleman
Committee Members: Dr. Bhaskar Krishnamachari (Chair), Dr. Rafael Ferreira da Silva, Dr. Jyotirmoy Deshmukh, Dr. Konstantinos Psounis, Dr. Murali Annavaram
Title: Dispersed Computing in Dynamic Environments
Abstract: Task scheduling is a fundamental problem in distributed computing and thus has received substantial scholarly attention. Most existing solutions, however, fall short of accommodating the dynamic and stochastic nature of modern dispersed computing systems (e.g., IoT, edge, and robotic systems). In this proposal, we present our existing work to address this gap and identify theoretical and practical research directions that would build upon our previous work to advance the current state of the art in task scheduling for dynamic environments.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Melissa Ochoa
Event Link: https://usc.zoom.us/j/92645861253?pwd=NmRaaE5IeXM0b3VHbEpXRUZzT1Yrdz09
-
CS Colloquium: Luca Luceri (ISI/USC) - AI-Driven Approaches for Countering Influence Campaigns in Socio-Technical Systems
Thu, Oct 26, 2023 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Luca Luceri, ISI / USC
Talk Title: AI-Driven Approaches for Countering Influence Campaigns in Socio-Technical Systems
Abstract: The proliferation of online platforms and social media has sparked a surge in information operations designed to manipulate public opinion on a massive scale, posing significant harm at both the individual and societal levels. In this talk, I will outline a research agenda focused on identifying, investigating, and mitigating orchestrated influence campaigns and deceptive activities within socio-technical systems. I will start by detailing my research efforts in designing AI-based approaches for detecting state-backed troll accounts on social media. Modeling human decision-making as a Markov Decision Process and using an Inverse Reinforcement Learning framework, I will illustrate how we can extract the incentives that social media users respond to and differentiate genuine users from state-sponsored operators. Next, I will delve into a set of innovative approaches I developed to uncover signals of inauthentic, coordinated behaviors. By combining embedding techniques to unveil unexpected similarities in the activity patterns of social media users, along with graph decomposition methods, I will show how we can reveal network structures that pinpoint coordinated groups orchestrating information operations. Through these approaches, I will provide actionable insights to inform regulators in shaping strategies to tame harm, discussing challenges and opportunities to improve the resilience of the information ecosystem, including the potential for interdisciplinary collaborations to address these complex issues.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Luca Luceri is a Research Scientist at the Information Sciences Institute (ISI) at the University of Southern California (USC). His research incorporates machine learning, data and network science, with a primary focus on detecting and mitigating online harms in socio-technical systems. He investigates deceptive and malicious behaviors on social media, with a particular emphasis on problems such as social media manipulation, (mis-)information campaigns, and Internet-mediated radicalization processes. His research advances AI/ML/NLP for social good, computational social science, and human-machine interaction. In his role as a Research Scientist at ISI, Luca Luceri serves as a co-PI of the DARPA-funded program INCAS, aiming to develop techniques to detect, characterize, and track geopolitical influence campaigns. Additionally, he is the PI of a Swiss NSF-sponsored project called CARISMA, which develops network models to simulate the effects of moderation policies to combat online harms.
Host: CS Department
Location: Olin Hall of Engineering (OHE) - 136
Audiences: Everyone Is Invited
Contact: CS Faculty Affairs
-
Roblox Trojan Talk
Thu, Oct 26, 2023 @ 06:15 PM - 08:00 PM
Viterbi School of Engineering Career Connections
Workshops & Infosessions
Working at Roblox: Early Careers, Product, and Generative AI
This workshop will cover the recruiting process at Roblox, the firsthand intern experience, the exciting happenings within the company, and a keynote from the director of our Generative AI product. There will also be time for Q&A, networking, and food!
Thursday, October 26, 6:15-8:00 PM
Stauffer Science Lecture Hall (SLH), Room 101
Agenda:
6:00-6:30: Set up
6:30-6:45: Intro by Louis and Nnaemeka
6:45-7:10: Gen AI Presentation by Josh Anon
7:10-7:30: Q&A
7:30-8:00: Network
Target student majors: All degrees under the Thomas Lord Department of Computer Science, including but not limited to:
CS (all, including CS games & CSBA)
DS (all, including comm DS)
Target student degree levels:
Bachelor's
Master's
PhD
Location: John Stauffer Science Lecture Hall (SLH) - 100
Audiences: Everyone Is Invited
Contact: Viterbi Career Connections, RTH 218