Events for September
-
CS Colloquium
Tue, Sep 14, 2010 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Dr. Daniel Golovin, Caltech
Talk Title: Adaptive Submodularity: A New Approach to Active Learning and Stochastic Optimization
Abstract: Solving stochastic optimization problems under partial observability, where one needs to adaptively make decisions with uncertain outcomes, is a fundamental but notoriously difficult challenge. In this talk, I will introduce a new concept called adaptive submodularity, which generalizes submodular set functions to adaptive policies. In many respects adaptive submodularity plays the same role for adaptive problems as submodularity plays for nonadaptive problems. Specifically, just as many nonadaptive problems with submodular objectives have efficient algorithms with good approximation guarantees, so too do adaptive problems with adaptive submodular objectives. We use this fact to recover and generalize several previous results in adaptive optimization, including results for active learning and adaptive variants of maximum coverage and set cover. Applications include machine diagnosis, observation selection and sensor placement problems, and an adaptive version of a viral marketing problem studied by Kempe et al. This is joint work with Andreas Krause.
Biography: Daniel Golovin is a postdoctoral fellow in Caltech's Center for the Mathematics of Information. His current research mainly focuses on online and approximation algorithms for machine learning and optimization, with an eye towards creating principled solutions that work well in practice. Prior to joining Caltech, he obtained a PhD from Carnegie Mellon University in 2008, and spent an additional year there at the Center for Computational Thinking. He did his undergraduate work at Cornell University.
Host: Dr. David Kempe
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: Kanak Agrawal
-
CS Colloquium
Thu, Sep 16, 2010 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Dr. Ashwin Ram, Georgia Tech
Talk Title: Real-Time Case-Based Reasoning for Interactive Digital Entertainment
Abstract: User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.
To understand why authoring Game AI is hard, we need to understand how Game AI works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I will discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.
I propose an alternative approach to designing Game AI: Real-Time CBR (Case-Based Reasoning). This approach extends CBR to real-time systems that operate asynchronously during game play, planning, adapting, and learning in an online manner. Originally developed for robotic control, Real-Time CBR can be used for interactive games ranging from multiplayer strategy games to interactive believable avatars in virtual worlds.
As with any CBR technique, Real-Time CBR integrates problem solving with learning. This property can be used to address the authoring problem. I will show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them without programming. I conclude with some thoughts about the future of AI-based Interactive Digital Entertainment.
Biography: Dr. Ashwin Ram is an Associate Professor and Director of the Cognitive Computing Lab in the College of Computing at Georgia Tech, and an Adjunct Professor in Psychology at Georgia Tech and in MathCS at Emory University. He received his PhD from Yale University in 1989, his MS from the University of Illinois in 1984, and his BTech from IIT Delhi in 1982. He has published two books and over 100 scientific articles in international forums. He is a founder of Enkia Corporation, which develops AI software for social media applications, and of OpenStudy.com, an online social learning network for students and faculty.
Host: Dr. Milind Tambe
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: Kanak Agrawal
-
CS Colloquium
Mon, Sep 20, 2010 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Dr. Norman Sadeh, Carnegie Mellon University
Talk Title: User-Controllable Security and Privacy: Lessons from the Design and Deployment of a Family of Location Sharing Applications
Abstract: Increasingly, users are expected to configure a variety of security and privacy policies on their own, whether it is the firewall on their home computer, their privacy preferences on Facebook, or access control policies at work. In practice, research shows that users often have great difficulty specifying such policies. This in turn can result in significant vulnerabilities. This presentation will provide an overview of novel user-controllable security and privacy technologies and interfaces developed to empower users to more effectively and efficiently specify security and privacy policies. In particular, it will outline a new methodology to design expressive privacy and security policies that derives from new work in mechanism design and usability. Results from this research shed some light on why, despite all the hoopla, most location sharing applications available in the marketplace today have failed to gain much traction.
Biography: Norman Sadeh is a Professor in the School of Computer Science at Carnegie Mellon University. His broad research interests include Web Security, Privacy, and Commerce. He is co-Director of the School of Computer Science PhD Program in Computation, Organizations and Society and of the School's Mobile Commerce Lab. Norman has been on the faculty at Carnegie Mellon since 1991. In the late nineties, he also served as Chief Scientist of the European Union's $800M e-Work and e-Commerce program, which at the time included all European-level cyber security and online privacy research. He has authored over 160 scientific publications, including several books, and has also co-founded two companies to commercialize his technologies: Wombat Security Technologies and Zipano Technologies. Among other awards and honors, Norman was a co-recipient of IBM's 2004 Best Academic Privacy Faculty award. Norman received his PhD in computer science from Carnegie Mellon University, an MSc, also in computer science, from the University of Southern California, and a BS/MSc in Electrical Engineering and Applied Physics from Brussels Free University.
Host: Dr. Milind Tambe
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Kanak Agrawal
-
CS Colloquium
Tue, Sep 21, 2010 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Prof. Gert Lanckriet, UCSD
Talk Title: Multimodal Music Search and Discovery
Abstract: The revolution in production and distribution of music, which has made millions of audio clips instantly available to millions of people, has created the need for novel music search and discovery technologies.
While successful technologies with great societal impact exist for text-based document search (e.g., Yahoo!, Google), a "Google for Music" has yet to emerge: there is no easy way to find a mellow Beatles song on a nostalgic night, scary Halloween music on October 31st, or romantic jazz with saxophone and deep male vocals, without knowing an appropriate artist or song title.
The non-text-based, multimodal character of Internet-wide information about music (audio clips, lyrics, web documents, artist networks, band images, etc.) poses a new and difficult challenge to existing database technology, due to its dependence on unimodal, text-based data structures. Two fundamental research questions are at the core of addressing this challenge: (1) the automated indexing of non-text-based music content, and (2) the automated integration of the heterogeneous content of multimodal music databases to retrieve the most relevant information, given a query.
In this talk, I will outline some of my recent research in machine learning, statistics and optimization, inspired and driven by the previous two research questions in the emerging field of computer audition and music information retrieval. This will cover a spectrum from sparse generalized eigenvalue problems to human computation games, and from clustering graphical models to multiple-kernel partial order embeddings.
Biography: Gert Lanckriet received a Master's degree in Electrical Engineering from the Katholieke Universiteit Leuven, Leuven, Belgium, in 2000, and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Science from the University of California, Berkeley, in 2001 and 2005, respectively. In 2005, he joined the Department of Electrical and Computer Engineering at the University of California, San Diego, where he heads the Computer Audition Laboratory. He was awarded the SIAM Optimization Prize in 2008 and is the recipient of a Hellman Fellowship and an IBM Faculty Award. His research focuses on the interplay of convex optimization, machine learning, and applied statistics, with applications in computer audition and music information retrieval.
Host: Prof. Fei Sha
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: Kanak Agrawal
-
CS Colloquium
Tue, Sep 28, 2010 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Prof. Irfan Essa, Georgia Tech, School of Interactive Computing
Talk Title: Two Short Talks on Video Analysis: (1) Segmentation of Video and (2) Prediction of Actions in Video
Abstract: My research group is focused on a variety of approaches for video analysis and synthesis. In this talk, I will concentrate on two of our recent efforts: one aimed at robust spatio-temporal segmentation of video, and another at using motion and flow to predict actions from video.
In the first part of the talk, I will present an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. In this work, we begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph. I will demonstrate a variety of examples of how this robust segmentation works, and will show additional examples of video retargeting that use spatio-temporal saliency derived from this segmentation approach. (Matthias Grundmann, Vivek Kwatra, Mei Han, Irfan Essa, CVPR 2010; in collaboration with Google Research.)
In the second part of this talk, I will show that constrained multi-agent events can be analyzed and even predicted from video. Such analysis requires estimating the global movements of all players in the scene at any time, and is needed for modeling and predicting how the multi-agent play evolves over time on the playing field. To this end, we propose a novel approach to detect the locations where the play evolution will proceed, e.g., where interesting events will occur, by tracking player positions and movements over time. To achieve this, we extract the ground-level sparse movement of players in each time step and then generate a dense motion field. Using this field, we detect locations where the motion converges, implying positions towards which the play is evolving. I will show examples of how we have tested this approach for soccer, basketball, and hockey. (Kihwan Kim, Matthias Grundmann, Ariel Shamir, Iain Matthews, Jessica Hodgins, Irfan Essa, CVPR 2010; in collaboration with Disney Research.) Time permitting, I will show some more videos of our recent work on video analysis and synthesis. For more information, papers, and videos, see my website at http://prof.irfanessa.com/
Biography: Irfan Essa is a Professor in the School of Interactive Computing (IC) of the College of Computing (CoC), and an Adjunct Professor in the School of Electrical and Computer Engineering, at the Georgia Institute of Technology (Georgia Tech) in Atlanta, Georgia, USA. He works in the areas of Computer Vision, Computer Graphics, Computational Perception, Robotics, and Computer Animation, with potential impact on Video Analysis and Production (e.g., Computational Photography & Video, Image-based Modeling and Rendering), Human-Computer Interaction, and Artificial Intelligence research. Specifically, he is interested in the analysis, interpretation, authoring, and synthesis of video, with the goals of building aware environments, recognizing and modeling human activities and behaviors, and developing dynamic and generative representations of time-varying streams. He has published over 150 scholarly articles in leading journals and conference venues on these topics and has received awards for his research and teaching. He joined the Georgia Tech faculty in 1996 after earning his MS (1990) and PhD (1994) at the Massachusetts Institute of Technology (Media Lab), where he also held a research faculty position from 1988 to 1996. His doctoral research was in the area of facial recognition, analysis, and synthesis.
Host: Prof. Gerard Medioni
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: Kanak Agrawal
-
CS Talk
Thu, Sep 30, 2010 @ 12:30 PM - 02:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Prof. Xifeng Yan, UCSB
Talk Title: Graph Pattern Mining
Abstract: Graphs and networks are ubiquitous, encoding complex relationships ranging from chemical bonds to social interactions. Hidden in these networks are the answers to many important questions in biology, business, and sociology. We are developing a general graph information system to address the needs of searching and mining complex networks in these domains. In this talk, I will introduce the basic concept of graph pattern mining, its algorithms and applications including graph classification and graph search. I will also briefly discuss my recent work on business workflow analysis, malware detection, etc.
Biography: Xifeng Yan is an assistant professor at the University of California, Santa Barbara, where he holds the Venkatesh Narayanamurti Chair in Computer Science. He received his Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign in 2006, and was a research staff member at the IBM T. J. Watson Research Center between 2006 and 2008. He works on modeling, managing, and mining large-scale graphs in bioinformatics, social networks, the Web, and computer systems. His work has been extensively cited, with over 3,000 citations according to Google Scholar. For more information, please visit www.cs.ucsb.edu/~xyan.
Host: Prof. Yan Liu
Location: Mark Taper Hall Of Humanities (THH) - 106
Audiences: Everyone Is Invited
Contact: Kanak Agrawal