
Events Calendar




Conferences, Lectures, & Seminars
Events for April

  • CS Colloq: Securing the Web With Decentralized Information Flow Control

    Tue, Apr 01, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Securing the Web With Decentralized Information Flow Control

    Speaker: Maxwell Krohn (MIT)

    Abstract: The recent successes of server-side applications (e.g., Google and Facebook applications) hint that tomorrow's computing platform might not be the local desktop but rather the extensible remote Web site. Unfortunately, these new server-side platforms, built on conventional operating systems, are committing the same security mistakes already ossified in today's insecure desktops. In this talk, I will discuss how to secure both today's Web sites and tomorrow's Web computing platforms with a new OS technique called Decentralized Information Flow Control (DIFC). A DIFC system tracks the flow of secret data as it is copied from file to file and communicated from process to process. In the end, the OS lets modules known as "declassifiers" legislate policies for secret data exiting to the network. DIFC provides better security than standard OSes because it allows developers to concentrate security-critical code in small, audit-friendly declassifiers, which remain small and contained even as the overall system balloons with new features. This talk presents DIFC, an implementation of it for Linux, and a case study of a complex, popular open-source application (MoinMoin Wiki) secured with DIFC. MoinMoin is a prototype for more ambitious and general work to come, such as a novel Web-based application platform with encouraging security guarantees.

    Joint work with: Micah Brodsky, Natan Cliffer, Petros Efstathopoulos, Cliff Frey, Eddie Kohler, David Mazieres, Robert Morris, Frans Kaashoek, Steve VanDeBogart, Mike Walfish, Alex Yip, and David Ziegler

    Biography: Maxwell Krohn is a PhD candidate in Computer Science at MIT. He received his BA from Harvard in 1999 and was a staff research scientist at NYU from 2002 to 2003. In between, he co-founded and co-built several community Web sites, some vintage (TheSpark.com), others live and kicking (SparkNotes.com and OkCupid.com). His research interests are in operating systems, distributed systems, and security.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

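The label-tracking idea in the abstract can be sketched in a few lines. This toy model is illustrative only; the names `Endpoint`, `Declassifier`, and `can_flow` are invented for this sketch and are not the API of any real DIFC system:

```python
# Toy model of Decentralized Information Flow Control (DIFC):
# data carries secrecy tags, and the only way tagged data reaches
# the public network is through a small, auditable declassifier.

class Endpoint:
    def __init__(self, name, label=frozenset()):
        self.name = name
        self.label = frozenset(label)   # set of secrecy tags carried

def can_flow(src: Endpoint, dst: Endpoint) -> bool:
    """Data may flow only to endpoints tagged with at least the same secrets."""
    return src.label <= dst.label

class Declassifier:
    """A small, audit-friendly module allowed to strip specific tags."""
    def __init__(self, removable_tags):
        self.removable = frozenset(removable_tags)

    def declassify(self, ep: Endpoint) -> Endpoint:
        return Endpoint(ep.name, ep.label - self.removable)

# Example: a user's profile data is tagged with a secrecy tag.
profile = Endpoint("profile", {"alice_secret"})
network = Endpoint("network")            # untagged public sink

assert not can_flow(profile, network)    # direct export is blocked by the OS
gate = Declassifier({"alice_secret"})    # policy lives in one small module
assert can_flow(gate.declassify(profile), network)
```

The point of the design, as the abstract notes, is that only the declassifier needs auditing; the rest of the application can grow without enlarging the trusted code.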
  • CS Colloq: Acquisition, Compression, and Transfer of Reflectance Fields

    Wed, Apr 02, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Acquisition, Compression, and Transfer of Reflectance Fields

    Speaker: Dr. Pieter Peers (ICT)

    Abstract: A reflectance field describes the light transport through a scene in terms of incident and radiant illumination. Full knowledge of a scene's reflectance field makes it possible to view the scene from any viewpoint and under any illumination condition. This simple yet powerful formulation is the basis for many image-based methods, and is the main focus of my research, which can be categorized into three main topics: acquisition, compression, and transfer of reflectance fields. For each of these topics, a selected contribution is discussed in detail. The first contribution is an acquisition method that captures detailed reflectance fields for image-based relighting using non-adaptive illumination (i.e., measurement) patterns. These measurement patterns are incoherent with the reflectance field itself, and the number of measurement patterns is proportional to the size of the compressed field rather than to the size of the full uncompressed field. The second contribution is a novel compression method for (measured) heterogeneous subsurface scattering, i.e., the way light scatters through a non-homogeneous semi-opaque medium. This compression is based on a specially developed matrix factorization method. The third contribution demonstrates a post-production method that generates plausible relit sequences of dynamic facial performances of a subject. Relighting is achieved by transferring reflectance information from a single reflectance field of a static pose of a potentially different, but similar-looking, subject. In the final part of the presentation, an overview of short-term and long-term research plans is given, along with some preliminary results of recent research.

    Biography: Pieter Peers is currently a senior researcher in the Graphics Lab at the Institute for Creative Technologies (ICT) of the University of Southern California (USC). Before that he was a research assistant in the Computer Graphics Research Group at the K.U.Leuven (Belgium), where he obtained his Ph.D. in August 2006. His research focuses on data-driven computer graphics, in particular image-based relighting.

    Location: Henry Salvatori Computer Science Center (SAL) - 322

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

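The acquisition method rests on the linearity of light transport: a photograph of the scene under any illumination is a weighted sum of photographs under individual basis lights, which is also why measuring with combined (non-adaptive) patterns yields usable data directly. A minimal NumPy sketch with hypothetical data, not the paper's method:

```python
import numpy as np

# Reflectance field T: each column is the scene photographed under one light.
rng = np.random.default_rng(0)
n_pixels, n_lights = 6, 4
T = rng.random((n_pixels, n_lights))

# A new illumination condition is a vector of weights over the basis lights;
# relighting is a matrix-vector product by linearity of light transport.
L = np.array([0.5, 0.0, 1.0, 0.25])
relit = T @ L

# Photographing under a combined illumination *pattern* p measures T @ p
# in one shot -- the basis of measuring with non-adaptive patterns.
pattern = np.array([1.0, 1.0, 0.0, 0.0])
photo_under_pattern = T @ pattern
assert np.allclose(photo_under_pattern, T[:, 0] + T[:, 1])
```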
  • CS Colloq: Places Everyone: Creating an Animated Tapestry of Human Activity for Virtual Worlds

    Thu, Apr 03, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Places Everyone: Creating an Animated Tapestry of Human Activity for Virtual Worlds

    Speaker: Jan Allbeck (UPenn)

    Abstract: As we journey through our day, our lives intersect with other people. We see people leaving for work, waiting for trains, meeting with friends, hard at work, and engaged in thousands of other activities that we may not even be conscious of. People create a rich tapestry of activity throughout the day: a human texture. We may not always be aware of this texture, but we would definitely notice if it were missing, and it is missing from many simulations. Creating virtual scenarios that simulate a substantial human population with typical and varied behaviors can be an overwhelming task. In addition to modeling the environment and characters, tagging the environment with semantic data, and creating motions for the characters, the simulation engineer also needs to create character profiles for a heterogeneous population and link these character traits to appropriate behaviors to be performed at appropriate times and in appropriate places during the simulation. Due to the large number of individuals, the variety of behaviors they may engage in, and the potential complexity of environments, this is currently beyond the scope of military, crowd-research, or entertainment simulations. At present, simulations either have a very limited number of character profiles or are meticulously hand-scripted. I will describe a framework, called CAROSA (Crowds with Aleatoric, Reactive, Opportunistic, and Scheduled Actions), that facilitates the creation of heterogeneous populations for large-scale simulations by using a commercial off-the-shelf software package (Microsoft Outlook®), a Parameterized Action Representation (PAR), and a multiple-human-agent simulation system (HiDAC). CAROSA incorporates four broad action types: scheduled, reactive, opportunistic, and aleatoric. Scheduled activities arise from specified roles for individuals or groups; reactive actions are triggered by contextual events or environmental constraints; opportunistic actions arise from explicit goals and priorities; aleatoric actions are random but structured by choices, distributions, or parametric variations. The CAROSA architecture enables the specification and control of actions for more realistic large-scale human textures in virtual worlds such as buildings and cities, links human characteristics and high-level behaviors to animated graphical depictions, and relieves some of the burden of creating and animating heterogeneous 3D animated human populations.

    Biography: I am a Ph.D. candidate in the Department of Computer and Information Science, part of the School of Engineering and Applied Science at the University of Pennsylvania. My advisor is Dr. Norman I. Badler. I am also Associate Director of the Center for Human Modeling and Simulation (HMS), where I coordinate and participate in the research projects affiliated with HMS as well as coordinating the operational aspects of the lab facility. I have Bachelor's degrees in Mathematics and Computer Science from Bloomsburg University and a Master's degree in Computer and Information Science from Penn. I have had the great opportunity to explore many aspects of computer graphics, but am most drawn to research at the crossroads of animation, artificial intelligence, and psychology in the simulation of virtual humans. My current research focuses on the creation and simulation of heterogeneous, functional crowds.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

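The four action types can be pictured as a simple priority dispatch: react to events first, follow the schedule next, pursue goals when free, and otherwise pick a structured random action. Everything below (field names, the priority ordering) is an invented illustration of the taxonomy, not the actual CAROSA framework:

```python
import random

def next_action(agent, time, events):
    """Choose an agent's next action from the four CAROSA-style categories."""
    # Reactive: triggered by contextual events (given top priority here).
    if events:
        return ("reactive", events[0])
    # Scheduled: arising from the agent's role calendar.
    if time in agent["schedule"]:
        return ("scheduled", agent["schedule"][time])
    # Opportunistic: pursue an explicit goal when otherwise free.
    if agent["goals"]:
        return ("opportunistic", agent["goals"][0])
    # Aleatoric: random, but structured by a distribution over choices.
    acts, weights = zip(*agent["aleatoric"].items())
    return ("aleatoric", random.choices(acts, weights)[0])

clerk = {"schedule": {9: "staff the desk"},
         "goals": ["refill coffee"],
         "aleatoric": {"stretch": 0.7, "check phone": 0.3}}

assert next_action(clerk, 9, ["fire alarm"]) == ("reactive", "fire alarm")
assert next_action(clerk, 9, []) == ("scheduled", "staff the desk")
assert next_action(clerk, 10, []) == ("opportunistic", "refill coffee")
```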
  • CS Colloq: Fluid Simulation With Reduced Diffusion, Dissipation, and Volume Loss

    Tue, Apr 08, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Fluid Simulation With Reduced Diffusion, Dissipation, and Volume Loss

    Speaker: Dr. ByungMoon Kim (Georgia Tech)

    Abstract: Recent advances in simulation methods for three-dimensional computer animation have led to a significant increase in realism and have benefited the digital entertainment industry. We will discuss two methods for improving the realism of fluid simulations: (1) improved BFECC advection, which increases the dynamic detail of simulated fluid motion, and (2) a volume control technique that prevents the loss of fluid volume. We will demonstrate these advances on simulations of smoke, liquid, bubbles, rigid bodies, and foam. More detailed summaries of the two methods are provided below. We will also discuss future opportunities afforded by the symbiotic relation between scientific computing and computer animation. BFECC (Back and Forth Error Compensation and Correction) was recently developed for interface computation using a level set method. We show that BFECC can be applied to reduce the dissipation and diffusion encountered in a variety of advection steps, such as velocity, smoke-density, and image advection on uniform and adaptive grids and on a triangulated surface. BFECC can be implemented trivially as a small modification of first-order upwind or semi-Lagrangian integration of the advection equation, and it provides second-order accuracy in both space and time. When applied to level set evolution, BFECC reduces volume loss significantly. We demonstrate the benefits of this approach on image advection and on the simulation of smoke, bubbles in water, and the highly dynamic interaction between water, a solid, and air. We also apply BFECC to dye advection to visualize vector fields. Liquid and gas interactions often contain bubbles that persist for a long time without bursting on the surface, forming a dry foam structure. Such long-lasting bubbles simulated by the level set method can suffer from a slow but steady volume error that accumulates to a visible amount of volume change. We propose to address this problem with a volume control method: we trace the volume change of each connected region and apply a carefully computed divergence that compensates for undesired volume changes. To compute the divergence, we construct a mathematical model of the volume change, choose control strategies that regulate the modeled volume error, and establish methods to compute the control gains that provide robust and fast reduction of the volume error and, if desired, control of how the volume changes over time.

    Biography: ByungMoon Kim received a Ph.D. in Computer Science in 2006 from the Georgia Institute of Technology. At the same school, he received master's degrees in Aerospace Engineering in 1999, Computer Science in 2005, and Mathematics in 2005. He received a bachelor's degree in Aerospace Engineering from Inha University, Inchon, Korea in 1994. After receiving the Ph.D., he worked as a temporary professor in Mathematics at the Georgia Institute of Technology, teaching and researching fluid simulation and image processing. He then joined NVIDIA Corp., where he worked on graphics device driver development, real-time graphics research, and physics simulation. His research interests are in computer graphics, focusing on fluid simulation, geometry processing such as mesh filtering and editing, and haptic devices. He is an author of papers on various topics: mobile robot control, a spacecraft simulator, collision prediction, mesh editing, non-photorealistic video processing, a mesh filter, a real-time shadow algorithm, and fluid simulation.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

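BFECC itself is simple enough to sketch: advect forward, advect the result backward in time, use half the difference from the original as an error estimate, and re-advect the corrected field. A 1-D periodic-grid sketch built on semi-Lagrangian advection (illustrative only, not the paper's implementation):

```python
import numpy as np

def semi_lagrangian(phi, u, dt, dx):
    """First-order semi-Lagrangian advection on a periodic 1-D grid."""
    n = len(phi)
    x = np.arange(n) - u * dt / dx          # trace back along the velocity
    i0 = np.floor(x).astype(int)
    frac = x - i0
    return (1 - frac) * phi[i0 % n] + frac * phi[(i0 + 1) % n]

def bfecc(phi, u, dt, dx):
    """Back and Forth Error Compensation and Correction."""
    forward = semi_lagrangian(phi, u, dt, dx)
    backward = semi_lagrangian(forward, -u, dt, dx)   # advect back in time
    error = 0.5 * (phi - backward)                    # estimated advection error
    return semi_lagrangian(phi + error, u, dt, dx)    # re-advect corrected field

# Advect a sharp bump for several steps: plain semi-Lagrangian smears the
# peak, while BFECC keeps it noticeably sharper. Both conserve total mass.
phi = np.zeros(64); phi[30:34] = 1.0
plain, better = phi.copy(), phi.copy()
for _ in range(10):
    plain = semi_lagrangian(plain, u=1.0, dt=0.3, dx=1.0)
    better = bfecc(better, u=1.0, dt=0.3, dx=1.0)

assert np.isclose(plain.sum(), phi.sum()) and np.isclose(better.sum(), phi.sum())
assert better.max() > plain.max()   # less numerical diffusion of the peak
```

Note that uncorrected BFECC can overshoot slightly near sharp features; production implementations typically add a limiter, which is omitted here.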
  • CS Colloq: Learning^3: Multi-Agent, Teacher-Agent, and Tutor-Student

    Wed, Apr 09, 2008 @ 03:30 PM - 04:30 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Learning^3: Multi-Agent, Teacher-Agent, and Tutor-Student

    Speaker: Dr. Yu-Han Chang (ISI)

    Abstract: Learning is a crucial aspect of any intelligent agent. The bulk of this talk will focus on our results in multi-agent learning, where agents must learn to adapt in environments populated with other adaptive, autonomous agents. I'll also spend some time briefly describing new projects in teachable agents, where agents can learn more rapidly by receiving interactive human instruction, and in adaptive tutoring systems, where the tutoring system must learn to adapt to differing student capabilities and styles. In multi-agent environments, learning must account for the adaptive nature of the other agents. Traditional models such as MDPs, POMDPs, and game-theoretic equilibria each have their shortcomings in this domain: e.g., the environment is not Markov, or the other agents may not be entirely rational. Regret is a principled framework for evaluating the performance of multi-agent learning algorithms, and regret-minimizing algorithms offer a good approach to this domain, one that does not need to make strong assumptions about the expected types of opponents. I'll describe an algorithm that exhibits good performance against a wide range of possible opponents and guarantees low regret against any arbitrary opponent.

    Biography: Dr. Yu-Han Chang is a Computer Scientist at the Information Sciences Institute of the University of Southern California. His current research interests range from reinforcement learning and game theory to natural language understanding and interactive games. Recent and ongoing projects include using machine learning to improve education, "learning by noticing", planning in continuous battle spaces, training intelligent agents via interactive games, and developing no-regret algorithms for learning in non-cooperative domains. Dr. Chang holds undergraduate degrees in Mathematics and Economics, as well as an S.M. in Computer Science, from Harvard University. He received his Ph.D. in Electrical Engineering and Computer Science from MIT, where he developed algorithms for multi-agent learning in the context of machine learning and game theory.

    Location: Henry Salvatori Computer Science Center (SAL) - 322

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

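For intuition about what "low regret against any arbitrary opponent" means, here is the classic exponential-weights (Hedge) rule, a standard regret-minimizing learner (not necessarily the algorithm of the talk): keep a weight per action, play proportionally to the weights, and decay each weight exponentially in its observed loss.

```python
import math

def hedge_loss(n_actions, losses, eta=0.5):
    """Run Hedge over a loss sequence; return the learner's total expected loss.
    losses[t][a] is the loss of action a at round t, revealed after play."""
    w = [1.0] * n_actions
    total = 0.0
    for round_losses in losses:
        s = sum(w)
        p = [wi / s for wi in w]                       # play action a w.p. p[a]
        total += sum(pi * li for pi, li in zip(p, round_losses))
        w = [wi * math.exp(-eta * li)                  # decay by observed loss
             for wi, li in zip(w, round_losses)]
    return total

# Against an adversary for whom action 0 is always good, Hedge's average
# loss approaches that of the best fixed action: per-round regret is small.
T = 200
losses = [[0.0, 1.0] for _ in range(T)]
avg = hedge_loss(2, losses) / T
best_fixed = 0.0
assert avg - best_fixed < 0.1
```

The regret guarantee is worst-case: the same bound holds even when the loss sequence is chosen adversarially, which is why no assumption about opponent rationality is needed.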
  • CS Colloq: Dynamics of Real-World Networks

    Thu, Apr 10, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Dynamics of Real-World Networks

    Speaker: Jure Leskovec (CMU)

    Abstract: The emergence of the Web and cyberspace gave rise to detailed traces of human social activity, offering great opportunities to analyze and model the behavior of millions of people. For example, we examined the "planetary-scale" dynamics of the full Microsoft Instant Messenger network, which contains 240 million people exchanging more than 255 billion messages per month (4.5 TB of data), making it the largest social network analyzed to date. In this talk I will focus on two aspects of the dynamics of large real-world networks: (a) the dynamics of information diffusion and cascading behavior in networks, and (b) the dynamics of the structure of time-evolving networks. First, I will consider network cascades created by a diffusion process in which behavior cascades from node to node like an epidemic. We study two related scenarios: information diffusion among blogs, and a viral marketing setting of 16 million product recommendations among 4 million people. Motivated by our empirical observations, we develop algorithms for detecting disease outbreaks and finding influential bloggers who create large cascades. We exploit the "submodularity" principle to develop an efficient algorithm that finds near-optimal solutions while scaling to large problems, running 700 times faster than a simple greedy solution. Second, in recent work we found counterintuitive patterns that change some of the basic assumptions about fundamental structural properties of networks varying over time. Leveraging these observations, we developed a Kronecker graph generator model that explains the processes governing network evolution. Moreover, we can fit the model to large networks, then use it to generate realistic graphs and make formal statements about their properties. Estimating the model naively takes O(N! N^2) time, while we develop a linear-time O(E) algorithm.

    Biography: Jure Leskovec (www.cs.cmu.edu/~jure) is a PhD candidate in the Machine Learning Department at Carnegie Mellon University. He is also a Microsoft Research Graduate Fellow. He received the ACM KDD 2005 and ACM KDD 2007 best paper awards, won the ACM KDD Cup in 2003, and topped the Battle of the Sensor Networks 2007 competition. Jure holds three patents. His research interests include applied machine learning and large-scale data mining, focusing on the analysis and modeling of large real-world networks and the study of phenomena across the social, technological, and natural worlds.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

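The "submodularity" speed-up alluded to above is commonly realized with lazy greedy evaluation (CELF-style): since marginal gains of a monotone submodular function only shrink as the selected set grows, stale gains are valid upper bounds, and most re-evaluations can be skipped. A sketch on a toy coverage objective; this shows the standard technique, not necessarily the paper's exact algorithm:

```python
import heapq

def lazy_greedy(ground_set, f, k):
    """Greedy maximization of a monotone submodular set function f with lazy
    (priority-queue) re-evaluation; value is within (1 - 1/e) of optimal."""
    selected, value = [], f(frozenset())
    # Max-heap (via negation) of marginal-gain upper bounds.
    heap = [(-(f(frozenset([e])) - value), e) for e in ground_set]
    heapq.heapify(heap)
    while len(selected) < k and heap:
        _, e = heapq.heappop(heap)
        fresh = f(frozenset(selected + [e])) - value   # recompute stale bound
        if not heap or fresh >= -heap[0][0]:           # still the best? take it
            selected.append(e)
            value += fresh
        else:
            heapq.heappush(heap, (-fresh, e))          # reinsert, updated bound
    return selected, value

# Coverage, a classic submodular objective: pick blogs covering most stories.
blogs = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2}}
cover = lambda S: len(set().union(*(blogs[b] for b in S)) if S else set())

chosen, val = lazy_greedy(list(blogs), cover, k=2)
assert val == 4 and "a" in chosen
```

On large cascade problems most elements never need re-evaluation after the first round, which is where the reported orders-of-magnitude speed-up over naive greedy comes from.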
  • CS Colloq: Learning Low Dimensional Representations of High Dimensional Data

    Tue, Apr 15, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Learning Low Dimensional Representations of High Dimensional Data

    Speaker: Dr. Fei Sha (UC Berkeley)

    Abstract: Statistical modeling of high-dimensional and complex data is a challenging task in machine learning. A very powerful strategy for tackling this problem is to identify and exploit low-dimensional structures intrinsic to the data. For example, text and image data can often be represented as superpositions of meaningful and interpretable structures such as "object parts" and "topics", composed of visually salient image patches or groups of semantically related words. Examples of such learning algorithms include nonnegative matrix factorization (NMF) and latent Dirichlet allocation (LDA), where parts and topics are encoded by nonnegative basis matrices and probability distributions, respectively. In this talk, I will focus on my research that has brought new and interesting developments into the frameworks of NMF and LDA. In the first project, I show how to extend the original NMF approach to learning meaningful "audio parts" from speech and audio data. The audio parts robustly encode harmonic structures in voices, which are key acoustic features for building machines that can analyze complicated acoustic signals as well as human listeners can. In the second project, I investigate how to incorporate supervisory information, such as class labels, into LDA models. In supervised LDA, topics are discovered by grouping words based not only on semantic similarity but also on class-label proximity. These topics yield compact representations with better predictive power than those derived from the original unsupervised LDA. Towards the end of the talk, I will briefly summarize my work on learning other types of latent structure, such as manifolds and clusters. I will then conclude by discussing all these approaches from a general perspective and speculating on a few interesting directions for future work.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

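The NMF "parts" decomposition mentioned in the abstract can be reproduced with the standard Lee-Seung multiplicative updates; the nonnegativity constraint is what makes the learned basis columns behave like additive parts. A generic sketch on synthetic data, not the talk's audio experiments:

```python
import numpy as np

# Factor V (features x samples) as V ~= W @ H with W, H >= 0.
rng = np.random.default_rng(1)
V = rng.random((20, 30))
r = 5                                        # number of "parts"
W = rng.random((20, r)) + 0.1
H = rng.random((r, 30)) + 0.1

def frob_err(V, W, H):
    return np.linalg.norm(V - W @ H)

before = frob_err(V, W, H)
for _ in range(100):
    # Multiplicative updates: ratios of nonnegative terms, so W and H
    # stay nonnegative automatically (small epsilon avoids division by 0).
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
after = frob_err(V, W, H)

assert after < before                        # error is driven down
assert (W >= 0).all() and (H >= 0).all()     # parts remain nonnegative
```

These updates are known to be non-increasing in the Frobenius reconstruction error, which is why no step size needs tuning.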
  • CS Colloq: Efficient, Adaptive Inference for Distributions on Permutations

    Fri, Apr 18, 2008 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Efficient, Adaptive Inference for Distributions on Permutations

    Speaker: Prof. Carlos Guestrin (CMU)

    Abstract: Permutations are ubiquitous in many real-world problems, such as voting, ranking, and data association. Representing uncertainty over permutations is challenging, since there are $n!$ possibilities, and typical compact representations, such as graphical models, cannot efficiently capture the mutual exclusivity constraints associated with permutations. In this talk, we use the "low-frequency" terms of a Fourier decomposition to represent such distributions compactly. We first describe how the two standard probabilistic inference operations, conditioning and marginalization, can be performed entirely in the Fourier domain in terms of these low-frequency components, without ever enumerating the $n!$ terms. We also describe a novel approach for adaptively picking the complexity of this representation in order to control the resulting approximation error. We demonstrate the effectiveness of our approach in a real camera-based multi-person tracking setting.

    Biography: I am an assistant professor in the Machine Learning Department and in the Computer Science Department at Carnegie Mellon University. I co-direct the Sense, Learn, and Act (Select) Lab with Geoff Gordon. In 2003-2004, I spent a year as a senior researcher at the Intel Research Lab in Berkeley. In August 2003, I received my Ph.D. in Computer Science from Stanford University, where I was advised by Daphne Koller in the DAGS research group. I received a Mechatronics Engineer degree (Mechanical Engineering, with emphasis in Automation and Systems) in 1998 from the Polytechnic School of the University of São Paulo, Brazil.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

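For intuition, the first-order terms of this Fourier decomposition amount to the n x n marginals M[i, j] = P(identity i is at position j), which form a doubly stochastic matrix. The toy sketch below conditions such marginals on a noisy observation and renormalizes with Sinkhorn iterations; this is only an illustration of the first-order picture, not the talk's algorithm, which also handles higher-order terms and adapts the representation:

```python
import numpy as np

def sinkhorn(M, iters=200):
    """Alternately normalize rows and columns until M is doubly stochastic."""
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)   # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)   # columns sum to 1
    return M

n = 3
M = np.full((n, n), 1.0 / n)        # uniform marginals: no idea who is where

# Noisy observation: identity 0 looks much more likely to be at track 0.
likelihood = np.ones((n, n))
likelihood[0, 0] = 9.0
M = sinkhorn(M * likelihood)        # condition, then restore mutual exclusivity

assert M[0, 0] > M[0, 1]                       # belief follows the observation
assert np.allclose(M.sum(axis=1), 1.0)         # still valid permutation marginals
assert np.allclose(M.sum(axis=0), 1.0)
```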
  • CS Colloq: Simple Encryption Schemes Against Sophisticated Attacks

    Tue, Apr 22, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Simple Encryption Schemes Against Sophisticated Attacks

    Speaker: Dr. Hoeteck Wee (Columbia)

    Abstract: With the increasing reliance on computers and the Internet for myriad tasks, from voting to auctions, comes a pressing need to develop cryptographic tools and protocols with stronger guarantees. Traditional cryptographic guarantees, such as data privacy amidst wiretapping and security against a static collection of malicious network entities, do not meet the security requirements of many of these tasks. For instance: an adversary may be unable to learn your bid in an online auction if the bid is encrypted, yet it could potentially modify the ciphertext to obtain one corresponding to a bid that is a dollar higher than yours. Likewise, an adversary that adaptively determines which electronic voting machines to break into during the course of an election has a better chance of influencing the outcome than one that makes its choices before the election commences. I will present new constructions of encryption schemes addressing each of these attacks. The first scheme guarantees that, given an encryption of a message, it is infeasible to generate an encryption of a related message. The second improves upon an important building block used in constructing protocols for general multi-party computation that are secure against an adversary that adaptively corrupts up to one third of the parties. Compared to most previous constructions, our schemes are simpler, more efficient, and can be realized under a larger class of cryptographic assumptions.

    Biography: Hoeteck Wee is a postdoctoral researcher at Columbia University. He completed his undergraduate studies at MIT and his PhD at UC Berkeley under the supervision of Luca Trevisan. He was a visiting student at Tsinghua University (Beijing) from August 2005 to June 2006, a core participant in the program on Securing Cyberspace at IPAM (UCLA) in Fall 2006, and a (one-time photographic) contributor to the Schmap Amsterdam Guide.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

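The bid-tampering attack in the abstract is easy to demonstrate against a malleable scheme such as a plain XOR stream cipher: the attacker raises the bid without ever learning it. This sketch illustrates the problem that non-malleable encryption, the subject of the talk, is designed to rule out:

```python
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """One-time-pad-style XOR; encryption and decryption are the same op."""
    return bytes(k ^ b for k, b in zip(key, data))

key = os.urandom(8)
bid = 100                                    # secret bid (even, so LSB is 0)
ct = xor_cipher(key, bid.to_bytes(8, "big"))

# The attacker flips the last ciphertext bit -- no key, no plaintext needed.
# XOR malleability means the same bit flips in the decrypted plaintext,
# turning the (even) bid into bid + 1.
tampered = ct[:-1] + bytes([ct[-1] ^ 0b1])

assert int.from_bytes(xor_cipher(key, tampered), "big") == bid + 1
```

A non-malleable scheme guarantees precisely that such a related-message ciphertext is infeasible to produce. (The bit-flip trick shown here relies on the bid being even; a real attacker would use the full XOR-difference control the cipher leaks.)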
  • CS Colloq: Random Projection Trees and Low Dimensional Manifolds

    Tue, Apr 29, 2008 @ 03:30 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Title: Random Projection Trees and Low Dimensional Manifolds

    Speaker: Prof. Sanjoy Dasgupta (UCSD)

    Abstract: The curse of dimensionality has traditionally been the bane of nonparametric statistics (histograms, kernel density estimation, nearest-neighbor search, and so on), as reflected in running times and convergence rates that are exponentially bad in the dimension. This problem is all the more pressing as data sets become increasingly high-dimensional. Recently the field has been rejuvenated in several ways, of which perhaps the most promising is the realization that a lot of real-world data which appears high-dimensional in fact has low "intrinsic" dimension, in the sense of lying close to a low-dimensional manifold. In the past few years, there has been a huge interest in learning such manifolds from data and then using the learned structure to transform the data into a lower-dimensional space where standard statistical methods generically work better. I'll exhibit a way to benefit from intrinsic low dimensionality without having to go to the trouble of explicitly learning its fine structure. Specifically, I'll present a simple variant of the ubiquitous k-d tree (a spatial data structure widely used in machine learning and statistics) that is provably adaptive to low-dimensional structure. We call this a "random projection tree" (RP tree). Along the way, I'll discuss different notions of intrinsic dimension, motivated by manifolds, by local statistics, and by analysis on metric spaces, and relate them. I'll then prove that RP trees require resources that depend only on these dimensions rather than on the dimension of the space in which the data happens to be situated. This is joint work with Yoav Freund (UC San Diego).

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Colloquia

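The construction is short enough to sketch: where a k-d tree splits on a coordinate axis, an RP tree splits each cell by thresholding a random unit direction at the median projection. This is a minimal illustration; the actual data structure in the paper also uses occasional splits by distance from the mean, omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def rp_tree(points, leaf_size=10):
    """Build a random projection tree over an (n, d) array of points."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    u = rng.normal(size=points.shape[1])
    u /= np.linalg.norm(u)                     # random unit direction
    proj = points @ u
    t = np.median(proj)                        # split cell at median projection
    left, right = points[proj <= t], points[proj > t]
    if len(left) == 0 or len(right) == 0:      # degenerate split: stop
        return {"leaf": points}
    return {"dir": u, "thresh": t,
            "left": rp_tree(left, leaf_size), "right": rp_tree(right, leaf_size)}

def depth(node):
    return 0 if "leaf" in node else 1 + max(depth(node["left"]),
                                            depth(node["right"]))

# Data of low intrinsic dimension: a 1-D curve embedded in 20 dimensions.
s = rng.random(200)
X = np.stack([np.sin(3 * s), np.cos(3 * s)] + [0 * s] * 18, axis=1)
tree = rp_tree(X)

# Median splits halve the data, so depth is logarithmic in n; the paper's
# result is that cell *diameters* shrink at a rate governed by the intrinsic,
# not the ambient, dimension.
assert depth(tree) <= 6
```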