University of Southern California

Events Calendar

Conferences, Lectures, & Seminars
Events for December 2009

  • CS Colloq: Dr. Vazirani

    Tue, Dec 01, 2009 @ 04:00 PM - 05:50 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Prof. Vijay V. Vazirani, Georgia Tech
    Hosts: Prof. Shanghua Teng and Prof. David Kempe
    Title: Can Complexity Theory Ratify the "Invisible Hand of the Market"?

    Abstract:
    "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard for their own interest." Each participant in a competitive economy is "led by an invisible hand to promote an end which was no part of his intention." (Adam Smith, 1776)

    With his treatise The Wealth of Nations (1776), Adam Smith initiated the field of economics, and his famous quote provided this field with its central guiding principle. The pioneering work of Walras (1874) gave a mathematical formulation of this statement, using his notion of market equilibrium, and opened up the possibility of a formal ratification. Mathematical ratification came with the celebrated Arrow-Debreu Theorem (1954), which established the existence of equilibrium in a very general model of the economy; however, an efficient mechanism for finding an equilibrium has remained elusive. The question of algorithmic ratification was taken up in earnest within theoretical computer science a decade ago, and attention soon gravitated toward markets under piecewise-linear, concave utility functions.

    As it turned out, the recent resolution of this open problem did not yield the hoped-for mechanism; however, it did mark the end of the road for the current approach. It is now time to step back and plan a fresh attack, using the powerful tools of modern complexity theory and algorithms. After providing a summary of key developments through the ages and a gist of the recent results, we will discuss some ways of moving forward. (Based in part on recent work with Mihalis Yannakakis. A toy numerical illustration of the market-equilibrium notion appears after the event listings below.)

    Speaker Bio:
    Vijay Vazirani received his Bachelor's degree from MIT in 1979 and his Ph.D. from U.C. Berkeley in 1983, and is currently Professor of Computer Science at Georgia Tech. His research career has centered on the design of algorithms, together with work on complexity theory, cryptography, coding theory, and game theory. He is best known for his work on efficient algorithms for the classical maximum matching problem (1980s), fundamental complexity-theoretic results obtained using randomization (1980s), approximation algorithms for basic NP-hard optimization problems (1990s), and efficient algorithms for computing market equilibria (current). In 2001 he published what is widely viewed as the definitive book on approximation algorithms; it has been translated into Japanese, French, and Polish, and Persian and Chinese translations are forthcoming. In 2005 he initiated work on a comprehensive volume on algorithmic game theory; the co-edited volume appeared in 2007.

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Front Desk

  • CS Colloq: Dr. Liang Huang

    Thu, Dec 03, 2009 @ 04:00 PM - 05:50 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Talk Title: Packed Forest for Parsing and Translation
    Speaker: Dr. Liang Huang
    Host: Prof. Paul Rosenbloom

    Abstract:
    A major challenge in natural language research is how to deal with ambiguity, because ambiguity grows exponentially with sentence length (consider, for example, the number of interpretations of a 60-word sentence). This suggests that the common practice of keeping a k-best list largely under-represents the whole search space. So is there a better way to efficiently represent and exploit the vast amount of ambiguity in human languages? The answer is the "packed forest", a polynomial-space data structure for representing exponentially many trees in a compact form by sharing common substructures. In this talk, I will apply this forest idea to both machine translation and syntactic parsing. In both tasks, we show that working with a forest encoding millions of trees improves state-of-the-art accuracies by considering many more alternatives. This results in the best parsing performance reported on the Penn Treebank. More interestingly, translating a forest of 10 million trees is even faster than translating 30 individual trees, thanks to dynamic programming. (This talk is intended for a general CS audience. A minimal packed-forest sketch appears after the event listings below.)

    Bio:
    Liang Huang is currently a Research Scientist at ISI. He received his PhD from the University of Pennsylvania in 2008, and worked as a Research Scientist at Google before moving back to ISI, where he had previously held two internships. He is mainly interested in the theoretical aspects of computational linguistics, in particular efficient algorithms for parsing and machine translation, generic dynamic programming, and formal properties of synchronous grammars. His work received an Outstanding Paper Award at ACL 2008 and Best Paper Nominations at ACL 2007 and EMNLP 2008. He also loves teaching, and won a University Teaching Award at Penn in 2005. He is currently teaching CS 562, Empirical Methods in Natural Language Processing.

    http://www.isi.edu/~lhuang
    http://www.cis.upenn.edu/~lhuang3

    Location: Seaver Science Library (SSL) - 150

    Audiences: Everyone Is Invited

    Contact: CS Front Desk

  • CS Colloq: Geoffrey Hollinger

    Thu, Dec 10, 2009 @ 04:00 PM - 06:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Talk Title: Efficient, Guaranteed Search with Multiple Robots
    Speaker: Geoffrey A. Hollinger
    Host: Prof. Gaurav Sukhatme

    Abstract:
    Consider the problem of coordinating a team of robots to locate a target in an environment, or to authoritatively say that no target exists. Such a scenario may occur in urban search and rescue, military operations, and even aged care. The search must be robust (deal with robot failures), decentralized (reduce computational and communication bottlenecks), and reactive (make use of any pertinent information that becomes available during the search). Prior methods in the literature would force you to make one of two assumptions in this scenario. Do you make the worst-case assumption and treat the target as adversarial? The robots could then use graph search algorithms that guarantee finding the target, but the search might take an unnecessarily long time. Or do you trust some non-adversarial model of the target? The robots could then optimize the search with respect to that model, but this approach loses its guarantees if the model is inaccurate, and the target may avoid the robots entirely. However, it is possible to do better: how can we strike a balance between risky average-case search and conservative worst-case search?

    In our recent work we have developed a method that combines the two search paradigms described above to generate plans that clear an environment of a worst-case adversarial target and have good average-case performance under a non-adversarial motion model. Our proposed algorithm takes advantage of spanning tree traversal methods along with receding horizon planning to generate a number of candidate search schedules. The resulting architecture is decentralized, scalable, and yields theoretically bounded average-case performance. We validate our algorithm through a number of experiments in simulation and on a team of robot and human searchers in an office building. In addition, I will discuss ongoing work on incorporating communication and connectivity constraints into the search schedules. (A small sketch of the "clear in the worst case, rank by the average case" idea appears after the event listings below.)

    Bio:
    Geoffrey Hollinger is a Ph.D. candidate in the Robotics Institute at Carnegie Mellon University. He is currently interested in designing scalable and distributed algorithms for estimation and multi-robot coordination in the physical world. He has worked on personal robotics at Intel Research Pittsburgh, multi-robot active estimation at the University of Pennsylvania's GRASP Laboratory, and miniature inspection robots for the Space Shuttle at NASA's Marshall Space Flight Center. He received his M.S. in Robotics from Carnegie Mellon University in 2007, and his B.S. in General Engineering along with his B.A. in Philosophy from Swarthmore College in 2005.

    Location: Ronald Tutor Hall of Engineering (RTH) - 406

    Audiences: Everyone Is Invited

    Contact: CS Front Desk

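
Illustrative sketches

The short sketches below are hypothetical toys keyed to the three colloquia above; none of the code comes from the speakers, and every graph, name, and number in it is invented for illustration.

The market-equilibrium notion in Prof. Vazirani's abstract can be seen concretely in a tiny exchange economy. The sketch assumes two agents with Cobb-Douglas utilities (much simpler than the piecewise-linear, concave utilities the talk concerns) and finds, by bisection, the relative price at which every agent's utility-maximizing demand exactly exhausts the total endowment, i.e., a Walrasian equilibrium.

```python
# Toy Walrasian equilibrium for a 2-agent, 2-good exchange economy.
# Hypothetical example: agents have Cobb-Douglas utilities
# u_i(x, y) = x**a_i * y**(1 - a_i), which give closed-form demands.

agents = [
    # (a_i, endowment of good x, endowment of good y)
    (0.7, 1.0, 0.0),   # agent 0 owns all of good x and prefers x
    (0.4, 0.0, 1.0),   # agent 1 owns all of good y and prefers y
]

def excess_demand_x(px, py=1.0):
    """Total demand for good x at prices (px, py) minus its total supply."""
    demand, supply = 0.0, 0.0
    for a, ex, ey in agents:
        income = px * ex + py * ey      # market value of the agent's endowment
        demand += a * income / px       # Cobb-Douglas demand: spend fraction a on x
        supply += ex
    return demand - supply

# Bisect on the relative price px/py; excess demand for x is decreasing in px.
lo, hi = 1e-6, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if excess_demand_x(mid) > 0:
        lo = mid                        # good x is over-demanded: raise its price
    else:
        hi = mid
px = 0.5 * (lo + hi)

print(f"equilibrium price ratio px/py ~ {px:.4f}")           # ~ 1.3333 here
print(f"residual excess demand for x ~ {excess_demand_x(px):.2e}")
# By Walras' law, once the market for x clears, the market for y clears as well.
```

At the equilibrium price ratio no further mutually beneficial trade exists, which is the "invisible hand" claim in its mathematical form; the algorithmic question in the talk is how efficiently such prices can be computed in richer utility models.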
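The packed forest from Dr. Huang's abstract can be sketched as a hypergraph: each node is a shared sub-analysis, and each incoming hyperedge is one way of building that node from smaller ones, so a linear number of hyperedges can encode exponentially many trees, and dynamic programming over the hyperedges replaces enumeration. The toy below uses a made-up forest (node names and edge scores are invented), not data or code from the talk.

```python
from functools import lru_cache
from math import prod

# A packed forest as a hypergraph: forest[node] is the list of incoming
# hyperedges, each a (tail_nodes, score) pair giving one way to build `node`.
# Toy construction: node "N{i}" can be built in 2 ways from "N{i-1}", so
# n levels pack 2**n distinct trees into only 2*n hyperedges.

def chain_forest(n):
    forest = {"N0": []}                          # leaf: no incoming hyperedges
    for i in range(1, n + 1):
        forest[f"N{i}"] = [((f"N{i-1}",), 0.9),
                           ((f"N{i-1}",), 0.4)]
    return forest

forest = chain_forest(20)

@lru_cache(maxsize=None)
def num_trees(node):
    """Dynamic programming: number of distinct trees rooted at `node`."""
    edges = forest[node]
    if not edges:
        return 1
    return sum(prod(num_trees(t) for t in tails) for tails, _ in edges)

@lru_cache(maxsize=None)
def best_score(node):
    """Viterbi over the forest: score of the single best tree under `node`."""
    edges = forest[node]
    if not edges:
        return 1.0
    return max(score * prod(best_score(t) for t in tails) for tails, score in edges)

print(num_trees("N20"))     # 1048576 trees are packed into 40 hyperedges ...
print(best_score("N20"))    # ... and the best one is scored in time linear in the edges
```

Real parsing and translation forests attach grammar rules and feature scores to the hyperedges, but counting and Viterbi recursions of this shape are what make a forest of millions of trees cheap to work with.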
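Geoffrey Hollinger's abstract combines two ideas: keep only search schedules that clear the environment of a worst-case target, and rank the survivors by average-case performance under a motion model. The sketch below does this for a single searcher on a small path graph. Everything in it (the graph, the candidate schedules, the lazy-random-walk target model) is a made-up simplification; the talk's method involves robot teams, spanning tree traversal, and receding horizon planning, none of which is modeled here.

```python
# Toy "guaranteed search, ranked by average case": one searcher on a path graph.

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]            # path graph 0-1-2-3-4
VERTICES = range(5)
NEIGHBORS = {v: set() for v in VERTICES}
for a, b in EDGES:
    NEIGHBORS[a].add(b)
    NEIGHBORS[b].add(a)

def clears_worst_case(schedule):
    """True if the schedule clears an adversarial, arbitrarily fast target:
    the contaminated set (where the target might be) spreads along any path
    that avoids the searcher's current vertex, so recontamination can occur."""
    contaminated = set(VERTICES) - {schedule[0]}
    for v in schedule[1:]:
        contaminated.discard(v)
        frontier = list(contaminated)
        while frontier:                              # flood fill around the searcher
            u = frontier.pop()
            for w in NEIGHBORS[u]:
                if w != v and w not in contaminated:
                    contaminated.add(w)
                    frontier.append(w)
    return not contaminated

def expected_capture_time(schedule):
    """Expected capture step for a target doing a lazy random walk (each step:
    stay put or move to a uniformly chosen neighbor); mass that is never
    caught within the schedule is charged the full horizon."""
    p = {v: 1.0 / len(VERTICES) for v in VERTICES}   # uniform prior over the target
    p.pop(schedule[0], None)                         # caught at time 0 costs 0
    expectation = 0.0
    for t, v in enumerate(schedule[1:], start=1):
        q = {}
        for u, mass in p.items():                    # one lazy random-walk step
            moves = [u] + sorted(NEIGHBORS[u])
            for w in moves:
                q[w] = q.get(w, 0.0) + mass / len(moves)
        expectation += t * q.pop(v, 0.0)             # mass captured at this step
        p = q
    return expectation + len(schedule) * sum(p.values())

def candidate_schedules(length=5):
    """All walks of the given length: each step, stay put or cross one edge."""
    for start in VERTICES:
        walks = [(start,)]
        for _ in range(length - 1):
            walks = [w + (x,) for w in walks
                     for x in [w[-1]] + sorted(NEIGHBORS[w[-1]])]
        yield from walks

guaranteed = [s for s in candidate_schedules() if clears_worst_case(s)]
best = min(guaranteed, key=expected_capture_time)
print(f"{len(guaranteed)} candidate schedules carry a worst-case clearing guarantee")
print("best of those by average-case capture time:", best)
```

The filter-then-rank structure is the point of the illustration: the worst-case clearing check supplies the guarantee, and the average-case motion model only decides among schedules that already have it.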