Events for February
-
CS Colloquia: Towards a Visually-Guided Semi-Autonomous Wheelchair for the Disabled
Tue, Feb 05, 2008 @ 01:30 PM - 03:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Towards a Visually-Guided Semi-Autonomous Wheelchair for the Disabled
Speaker: Prof. John K. Tsotsos (York)
ABSTRACT:
An intelligent, autonomous wheelchair for the disabled has been the dream of
many for some time.
Yet, the dream still seems very distant. In part, the role and utility
of vision seem not to have
reached their full potential in this application. I will describe a
long-standing project we affectionately call
Playbot, whose goal is to develop a purely visually-guided wheelchair with a
manipulator that would assist
a child. Most of the functionality easily translates to assistance for a
broader population. I will present an
overview of the project with a focus on several vision-based components
including active visual object
search, mapping, and doorway behavior. Video will demonstrate many of these
functions. There is
much to do, particularly in integration, and a preview of a control architecture
for this purpose will be given.
As a general goal, we seek to understand the role of vision, as a primary
sense, in autonomous assistive agents.
This project, framed against this ambitious goal, hopes to make a few small
steps towards the dream.

BIO:
John K. Tsotsos received an honours undergraduate degree in Engineering
Science in 1974 from the University of Toronto and continued at the University
of Toronto to complete a Master's degree in 1976 and a Ph.D. in 1980 both in
Computer Science. He was on the faculty in Computer Science and in Medicine at
the University of Toronto from 1980 - 1999, where he founded and led the
Computer Vision Research Group. In 2000 he moved to York University in Toronto
where he is currently Professor in the Dept. of Computer Science &
Engineering. He was Director of York's Centre for Vision Research, 2000 - 2006.
He holds the NSERC Tier I Canada Research Chair in Computational Vision
and is an Adjunct Professor in both the departments of Ophthalmology and of
Computer Science at the University of Toronto. He was a Fellow in the
Artificial Intelligence and Robotics program of the Canadian Institute for
Advanced Research from 1985 - 95, has several conference papers that received
recognition, was awarded the 2006 Canadian Image Processing and Pattern
Recognition Society Award for Research Excellence and Service, and is part of
the ACM Distinguished Speaker Program for 2007-08.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: CS Colloquia
-
CS Colloquia: Computing Equilibria in Games
Tue, Feb 05, 2008 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Computing Equilibria in Games
Speaker: Constantinos Daskalakis (UC Berkeley)
ABSTRACT:
Game Theory is important for the study of large competitive environments, such
as the Internet, the market, and even social and biological systems. A key
tool in analyzing such systems (games) is the study of their stable states,
that is, their equilibria. Understanding the properties of equilibria can give
insights into the effectiveness of economic policies, engineering decisions,
etc. However, due to the large scale of most interesting games, the problem of
computing equilibria cannot be separated from complexity considerations.
Motivated by this challenge, I will discuss the problem of computing
equilibria in games.

I will first show that computing a Nash equilibrium is an intractable problem.
It is not NP-complete, since, by Nash's theorem, an equilibrium is always
guaranteed to exist, but it is at least as hard as solving any fixed point
computation problem, in a precise complexity-theoretic sense.

In view of this hardness result, I will present algorithms for computing
approximate equilibria. In particular, I will describe algorithms that achieve
constant factor approximations for 2-player games, and give a quasi-polynomial
time approximation scheme for the multi-player setting.

Finally, I will consider a very natural and important class of games termed
anonymous games. In these games every player is oblivious to the identities of
the other players; examples arise in auction settings, congestion games, and
social phenomena. I will introduce a polynomial time approximation scheme for
the anonymous setting and provide surprising connections to Stein's method in
probability theory. (A toy epsilon-equilibrium check follows this listing.)

BIO:
Constantinos (or Costis) Daskalakis grew up in Athens, Greece, where he
received his undergraduate degree in Electrical and Computer Engineering from
the National Technical University of Athens. In 2004 he moved to California to
pursue a Ph.D. in Computer Science at U.C. Berkeley under the supervision of
Professor Christos H. Papadimitriou. Costis's work has focused on
computational game theory and applied probability, in particular the
computation of equilibria in games, the study of social networks, and
computational problems in biology. His research is motivated by two questions:
"How does the algorithmic perspective influence economics, biology, physics,
and the social sciences?" And, "how does the study of computational problems
arising from areas outside computer science transform the theory of
computation?"Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: CS Colloquia
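
The abstract above centers on approximate Nash equilibria. As a rough
illustration only (invented game and function names, not code from the talk),
this sketch computes, for a 2-player bimatrix game, the smallest epsilon for
which a given pair of mixed strategies is an epsilon-Nash equilibrium, i.e.,
the most either player could gain by deviating:

```python
# Toy epsilon-Nash equilibrium check for a 2-player (bimatrix) game.
# Illustrative only; not code from the talk.
import numpy as np

def epsilon_of(A, B, x, y):
    """Smallest epsilon for which mixed strategies (x, y) form an
    epsilon-Nash equilibrium of the bimatrix game (A, B).
    A[i, j]: row player's payoff; B[i, j]: column player's payoff."""
    row_gain = np.max(A @ y) - x @ A @ y   # row player's best deviation gain
    col_gain = np.max(x @ B) - x @ B @ y   # column player's best deviation gain
    return max(row_gain, col_gain)

# Matching pennies: the unique Nash equilibrium is uniform play by both.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
u = np.array([0.5, 0.5])
print(epsilon_of(A, B, u, u))                     # 0.0: exact equilibrium
print(epsilon_of(A, B, np.array([1.0, 0.0]), u))  # 1.0: pure play is exploitable
```

At an exact equilibrium this value is 0; the constant-factor approximation
algorithms mentioned in the talk bound exactly this deviation gain.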
-
CS Colloq: Secure Web Applications and Expressive Security Policies
Thu, Feb 07, 2008 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Secure Web Applications and Expressive Security Policies
Speaker: Stephen Chong (Cornell)
ABSTRACT:
Information-flow control promises strong, end-to-end security. In this talk,
I'll present two recent projects that make programming with information-flow
control more practical: a new way of writing secure web applications, and a
framework for expressive security policies.

Swift is a new, principled approach to building web applications that are
secure by construction. Swift automatically partitions application code while
providing assurance that the resulting placement of code and data on client
and server is secure and efficient. Application code is written as Java-like
code, annotated with information flow policies that specify the
confidentiality and integrity of information. Using these policies, the
compiler partitions a web application into JavaScript code to run on the
client, and Java code to run on the server. Code and data are placed to ensure
that the specified policies are obeyed, and also to provide good interactive
performance. However, security critical code and data are always placed on the
server. Swift makes it easier to write secure web applications: the programmer
uses just one language, and does not need to worry about the secure or
efficient placement of code and data.

Computer systems often have detailed and complicated information security
requirements, perhaps derived from legislation, or organizational policy.
However, it is difficult to ensure that these requirements are correctly
enforced in a system's implementation. We have developed a framework for
specifying, reasoning about, and enforcing two common requirements:
declassification and erasure. Declassification occurs when the confidentiality
of information is weakened, for example, allowing more people to read. Erasure
is the opposite, and occurs when confidentiality is strengthened, for example,
allowing fewer people to read, perhaps removing the information from the
system entirely. The framework's policies specify when declassification may
occur, and when erasure must occur. A security-type system, in conjunction
with a trusted runtime system, ensures that the policies are enforced. We have
used the policies to implement a secure remote voting service, giving
increased assurance that the voting service satisfies its information security
requirements. (A toy information-flow label check follows this listing.)

BIO:
Stephen Chong is a Ph.D. candidate at Cornell University, in Ithaca, NY, where
he is advised by Andrew Myers. Steve's research focuses on language-based
security and programming languages. He received a bachelor's degree from
Victoria University of Wellington, New Zealand, and plans to complete his
doctorate by May 2008.
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: CS Colloquia
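
As a rough, hypothetical illustration of the information-flow idea behind
Swift (this is not the Swift or Jif implementation; labels and names are
invented), confidentiality labels can be modeled as an ordered lattice, flows
from more- to less-confidential locations rejected, and secret data pinned to
the server:

```python
# Minimal sketch of information-flow label checking, in the spirit of the
# abstract above. Purely illustrative; not the Swift or Jif system.
from enum import IntEnum

class Label(IntEnum):
    PUBLIC = 0        # may be shipped to the client
    SECRET = 1        # must stay on the server

def check_flow(src: Label, dst: Label) -> None:
    """Permit a flow only if it goes 'up' the lattice (no leak)."""
    if src > dst:
        raise PermissionError(f"illegal flow: {src.name} -> {dst.name}")

def place(label: Label) -> str:
    """Toy placement rule: security-critical data stays on the server."""
    return "client" if label == Label.PUBLIC else "server"

check_flow(Label.PUBLIC, Label.SECRET)      # ok: confidentiality only rises
print(place(Label.SECRET))                  # -> server
try:
    check_flow(Label.SECRET, Label.PUBLIC)  # would need a declassify policy
except PermissionError as e:
    print(e)
```

In this toy model, the declassification and erasure policies of the talk
correspond to controlled exceptions to the check above: when a flow down the
lattice may happen, and when a flow up must happen.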
-
CS Colloq: The Impact of Research on the Development of Middleware Technology
Thu, Feb 14, 2008 @ 11:00 AM - 12:30 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: The Impact of Research on the Development of Middleware Technology
Speaker: Professor Wolfgang Emmerich (UCL)
ABSTRACT:
The middleware market represents a sizable segment of the overall Information
and Communication Technology market. In 2005, the annual middleware license
revenue was reported by Gartner to be in the region of
8.5 billion US Dollars. In this talk we address the question of whether research
had any involvement in the creation of the technology that is being sold in
this market. We attempt a scholarly discourse. We present the research method
that we have applied to answer this question. We then present a brief
introduction into the key middleware concepts that provide the foundation for
this market. It would not be feasible to investigate any possible impact that
research might have had. Instead we select a few very successful technologies
that are representative of the middleware market as a whole and show the
existence of impact of research results in the creation of these technologies.
We investigate the origins of web services middleware, distributed transaction
processing middleware, message oriented middleware, distributed object
middleware and remote procedure call systems. For each of these technologies
we are able to show ample influence of research and conclude that without the
research conducted by PhD students and researchers in university computer
science labs at Brown, CMU, Cambridge, Newcastle, MIT, Vrije, and University
of Washington as well as research in industrial labs at APM, AT&T Bell Labs,
DEC Systems Research, HP Labs, IBM Research and Xerox PARC we would not have
middleware technology in its current form. We summarise by distilling lessons
that can be learnt from this evidenced impact for future technology transfer
undertakings.

BIO:
Wolfgang Emmerich holds the Chair in Distributed Computing in the Department
of Computer Science at University College London. He is Director of Research
in the Dept. of Computer Science. He received his undergraduate degree in
Informatics from the University of Dortmund in 1990 and went on to conduct
research into process-centred software engineering environments. He received a
PhD in Computer Science from University of Paderborn in 1995. After a brief
post-doctoral appointment at the Software Verification Research Centre of the
University of Queensland in Brisbane, he joined The City University as a
Lecturer in 1996. He was appointed as a Lecturer at UCL in the Department of
Computer Science in 1997 and co-founded the Software Systems Engineering
Research Group, which he currently heads. He is a member of the ACM SIGSOFT
Impact project (see http://www.acm.org/sigsoft/impact) where the work
described here was conducted. In parallel to his academic career, he worked
for the Central European OMG representation on the CORBA middleware
specification and co-founded three start-up companies. He is a co-founder,
partner and non-executive director of the Zuhlke Technology Group.
Location: Seeley G. Mudd Building (SGM) - 123
Audiences: Everyone Is Invited
Contact: CS Colloquia
-
CS Colloq: New Primitives and Metrics for Distributed Systems
Tue, Feb 19, 2008 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: New Primitives and Metrics for Distributed Systems
Speaker: Dr. Byung-Gon Chun (ICSI)
ABSTRACT:
With the advent of data centers and "cloud computing", distributed
systems are becoming much larger and far more sophisticated, with
computation spread over thousands of hosts and complex execution
paths. In this talk I will discuss new approaches to securing and
understanding these complex systems.

I will first describe how we can build more robust systems using a new
trusted primitive called Attested Append-Only Memory (A2M). We trade
off assumptions on trusted components for improved Byzantine fault
bounds of safety and liveness. I will then present a way of
characterizing the complexity of general networked systems. I will
describe a metric based on distributed state dependencies, and apply
it to routing and classical distributed systems. (A toy append-only log sketch
follows this listing.)

BIO:
Byung-Gon Chun is a postdoctoral researcher at the International
Computer Science Institute, funded by Intel Corporation. He received
his Ph.D. in Computer Science in 2007 from the University of
California at Berkeley. His research interests span distributed
systems and networks with emphasis on fault tolerance, security,
complexity, and system troubleshooting.
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: CS Colloquia
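
To make the append-only idea concrete, here is a toy log in the spirit of A2M
(the real A2M interface and trust model are richer; everything below is an
invented simplification). Each append extends a hash chain, so a faulty host
cannot show two different histories for the same sequence number without
detection:

```python
# Toy append-only log inspired by the A2M abstract above. Illustrative only.
import hashlib

class AppendOnlyLog:
    def __init__(self):
        self._digest = b"\x00" * 32   # running hash over the whole history
        self._length = 0

    def append(self, entry: bytes) -> tuple[int, bytes]:
        """Append an entry; return (sequence number, attestation digest).
        There is deliberately no way to rewrite or truncate the log."""
        self._digest = hashlib.sha256(self._digest + entry).digest()
        self._length += 1
        return self._length, self._digest

log = AppendOnlyLog()
n1, d1 = log.append(b"op: write x=1")
n2, d2 = log.append(b"op: write x=2")
# Two clients comparing (sequence number, digest) pairs detect a forked
# (equivocating) history, because the chained digests will not match.
print(n1, d1.hex()[:16])
print(n2, d2.hex()[:16])
```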
-
CS Colloq: Data-Driven Grasping and Manipulation
Tue, Feb 26, 2008 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Data-Driven Grasping and Manipulation
Speaker: Prof. Nancy Pollard (CMU)
ABSTRACT:
Data captured from human performances of activities ranging from the everyday
through the extraordinary has become widely accessible over the past 10 years.
The ability to download or capture human motion and process it in real-time
has led to many new algorithms and new ways of thinking about character
animation and robot control. However, we do not yet know how to make the most
effective use of this data. What is important about a given performance? How
can it be modified to create realistic new scenarios? And what are the limits
of this approach? Can we ever create behavior that could be called dexterous
from a collection of observed performances?

In this talk, I will focus on the problem of creating dexterous grasping and
manipulation behaviors from observed performances. I will discuss how my ideas
have changed over the past decade, as we have gone from the idea that a grasp
is made up of contact points between the hand and object, through consideration
of the hand geometry, anatomical constraints, and dynamic properties, to the
observation that grasps often involve preparatory sensing and manipulation
actions, which we have shown can reduce the effort needed to acquire an object.
Results in computer animation and robot control, as well as results from
controlled human subjects experiments, will be presented.

BIO:
Nancy Pollard is an Associate Professor in the Robotics Institute and Computer
Science Department at Carnegie Mellon University. She received her PhD in
Electrical Engineering and Computer Science from the MIT Artificial
Intelligence Laboratory in 1994, where she performed research on grasp
planning for articulated robot hands. Before joining CMU, Nancy was an
Assistant Professor and part of the Computer Graphics Group at Brown
University. She received the NSF CAREER award in 2001 for research on
"Quantifying Humanlike Enveloping Grasps" and the Okawa Research Grant in 2006
for "Studies of Dexterity for Computer Graphics and Robotics."
Location: Grace Ford Salvatori Hall Of Letters, Arts & Sciences (GFS) - 220
Audiences: Everyone Is Invited
Contact: CS Colloquia
-
CS Colloq: Apprenticeship Learning
Tue, Feb 26, 2008 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Apprenticeship Learning
Speaker: Pieter Abbeel (Stanford)
ABSTRACT:
Machine learning is a powerful paradigm which enables autonomous
decision making by learning from examples. Despite its successes,
human learning and decision making still vastly outperform autonomous
decision making, particularly for complex sequential decision making
tasks, where decisions made now have great ramifications far into the
future. In this talk, I will present machine learning techniques with
formal performance guarantees that efficiently learn to perform well
in the apprenticeship learning setting---the setting in which expert
demonstrations of the (sequential decision making) task are available.
I will also describe how my apprenticeship learning techniques have
enabled us to solve real-world problems that could not be solved
before. For example, they have enabled a helicopter to perform by far
the most challenging aerobatic maneuvers performed by any autonomous
helicopter to date. They have also enabled us to learn an autonomous
controller for a quadruped robot to traverse challenging terrains and
to learn a variety of different driving behaviours in our highway
driving simulator. (A toy behavioral-cloning sketch follows this listing.)

BIO:
Pieter Abbeel is a Ph.D. candidate in the Computer Science
Department at Stanford University. His research focuses on machine
learning, including both the foundations of learning, and its practical
application to problems in text mining, computer vision, control,
computational biology, graphics, and computer systems.
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: CS Colloquia
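
The simplest baseline for learning from expert demonstrations is behavioral
cloning: fitting a direct mapping from states to expert actions. This is far
weaker than the apprenticeship-learning algorithms in the talk, which come
with formal performance guarantees, but it illustrates the setting; all data
below is synthetic:

```python
# Behavioral cloning by linear least squares: the crudest way to learn
# from demonstrations. Shown only to illustrate the setting; not the
# apprenticeship-learning methods of the talk. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert" demonstrations: actions a = K s + small noise.
K_true = np.array([[0.8, -0.3]])             # unknown expert gain
states = rng.normal(size=(500, 2))           # 500 demonstrated states
actions = states @ K_true.T + 0.01 * rng.normal(size=(500, 1))

# Fit a linear policy a = K_hat s to the demonstrations.
K_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

print("recovered policy gains:", K_hat.ravel())   # close to [0.8, -0.3]
new_state = np.array([1.0, 2.0])
print("cloned action:", new_state @ K_hat)        # imitates the expert
```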
-
CS Colloq: Modeling Human Behavior for Defense against Flash-Crowd Attacks
Wed, Feb 27, 2008 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Modeling Human Behavior for Defense against Flash-Crowd Attacks
Speaker: Dr. Jelena Mirkovic (ISI)
ABSTRACT:
Flash-crowd attacks are the most vicious form of distributed denial
of service (DDoS). They flood the victim with service requests
generated from numerous bots. Attack requests are identical in
content to those generated by legitimate, human users, and bots send
at a low rate to appear non-aggressive --- these features defeat many
existing DDoS defenses. We propose defenses against flash-crowd
attacks via human behavior modeling, which differentiate bots from
human users. Current approaches to human-vs-bot differentiation, such
as graphical puzzles, are insufficient and annoying to users, whereas
our defenses are highly effective and transparent to humans. We have
developed three types of human behavior models: a) request dynamics
models learn several features of human interaction dynamics, and
detect bots that exhibit higher aggressiveness in one or more of
these features, b) request sequence models learn visit and
transitional probabilities of user requests; they detect bots that
generate valid but low-probability sequences, and c) deception
techniques embed human-invisible objects into server replies, and
flag users that visit them as bots. Our techniques raise the bar for
a successful attack to a botnet size that is accessible to less than
5%, and sometimes less than 1%, of attackers today. (A toy request-dynamics
check follows this listing.)

BIO:
Dr. Jelena Mirkovic is a computer scientist at USC/ISI, which she
joined in 2007. Previously she was an assistant professor at the
University of Delaware, 2003-2007.
She received her M.S. and Ph.D. from UCLA, and her B.S. in Computer
Science and Engineering from the School of Electrical Engineering,
University of Belgrade, Serbia. Her current research is focused on:
methodologies for security experimentation, computer worms and viruses,
denial-of-service attacks, and IP spoofing.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: CS Colloquia
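
As a minimal, invented illustration of the request-dynamics idea (not the
models from the talk), a server could flag clients whose inter-request gaps
are faster or more regular than typical human browsing; the thresholds here
are arbitrary:

```python
# Toy "request dynamics" check in the spirit of the abstract above:
# humans pause between requests and are irregular; bots are often faster
# and metronome-regular. Thresholds and data are invented.
import statistics

def looks_like_bot(request_times, min_mean_gap=0.5, min_jitter=0.05):
    """request_times: request timestamps in seconds, sorted ascending."""
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if len(gaps) < 2:
        return False                      # too little evidence to decide
    mean_gap = statistics.mean(gaps)      # how fast the client requests
    jitter = statistics.stdev(gaps)       # how regular the pacing is
    return mean_gap < min_mean_gap or jitter < min_jitter

human = [0.0, 1.3, 4.1, 4.9, 9.2]         # irregular, human-like pacing
bot = [0.0, 0.2, 0.4, 0.6, 0.8]           # fast and perfectly regular
print(looks_like_bot(human))              # False
print(looks_like_bot(bot))                # True
```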
-
CS Colloq: Internet Equilibrium Analysis Through Separation of User and Network Behavior
Thu, Feb 28, 2008 @ 11:00 AM - 12:30 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Internet Equilibrium Analysis Through Separation of User and Network Behavior
Speaker: Prof. Y.C. Tay (National University of Singapore)
ABSTRACT:
Internet complexity makes reasoning about traffic equilibrium difficult, partly
because users react to congestion. This difficulty calls for an analytic
technique that is simple, yet has enough detail to capture user behavior and
flexibly address a broad range of issues.

This talk presents such a technique. It treats traffic equilibrium as a balance
between an inflow controlled by user demand, and an outflow provided by network
supply (link capacity, congestion avoidance, etc.). This decomposition is
demonstrated with a surfing session model, and validated with a traffic trace
and NS2 simulations.

The technique's accessibility and breadth are illustrated through an analysis
of several issues concerning the location, stability, robustness and dynamics
of traffic equilibrium. (A toy demand-supply balance sketch follows this
listing.)

(Joint work with D. Nguyen Tran, Eric Y. Liu, Wei Tsang Ooi and Robert Morris)

BIO:
Y.C. Tay received his B.Sc. degree from the University of Singapore and Ph.D.
degree from Harvard University. He is a professor in the Departments of
Mathematics and Computer Science at the National University of Singapore
(http://www.comp.nus.edu.sg/~tayyc). His main research interest is performance
modeling.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: CS Colloquia
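
The demand/supply decomposition can be illustrated with an invented toy model
(not the surfing-session model from the talk): offered traffic falls as
congestion rises, network throughput saturates at capacity, and equilibrium is
the load at which the two balance:

```python
# Toy traffic equilibrium as a demand/supply balance, in the spirit of
# the abstract above. Functions and constants are invented.

def demand(load):
    """Offered traffic falls as congestion (load) grows: users back off."""
    return 10.0 / (1.0 + load)

def supply(load):
    """Traffic the network can drain, saturating at capacity 6.0."""
    return min(load, 6.0)

# Find the equilibrium load where inflow balances outflow, by bisection
# on the excess demand f(load) = demand(load) - supply(load), which is
# positive at load 0 and negative at load 10.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if demand(mid) - supply(mid) > 0:
        lo = mid
    else:
        hi = mid

print("equilibrium load ~", round((lo + hi) / 2, 4))   # ~2.7016
```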
-
CS Colloq: Fitting Polynomials to Noisy Data
Thu, Feb 28, 2008 @ 03:30 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Title: Fitting Polynomials to Noisy Data
Speaker: Dr. Parikshit Gopalan (Washington)
ABSTRACT:
The problem of finding the polynomial that best fits a noisy data-set (or
polynomial reconstruction) has a long history, dating back to curve-fitting
problems studied in the 1800s. In the last two decades, there has been
tremendous progress on this problem in computer science, driven by the
discovery of powerful new algorithms. These results have spurred exciting new
developments in Coding theory, Computational learning, Cryptography and
Hardness of Approximation. In this talk, we will explore this problem from the
perspectives of Coding theory and Computational learning.

We begin with an algorithm for decoding a well-studied family of binary
error-correcting codes called Reed-Muller codes, which are obtained from
low-degree polynomials. The salient feature of this algorithm is that it works
even when the number of errors far exceeds the so-called Johnson bound.

I will present an algorithm for agnostically learning decision trees under the
uniform distribution. This is the first polynomial time algorithm for learning
decision trees in a harsh noise model. This algorithm solves the
reconstruction problem for real polynomials using tools from convex
optimization.

I will also discuss settings where the reconstruction problem seems
intractable. We will see evidence that the notorious Noisy Parity problem is
hard under the uniform distribution. We will see hardness results suggesting
that learning simple concepts with noise is impossible for arbitrary
distributions. (A toy least-squares curve-fitting sketch follows this listing.)

BIO:
Parikshit Gopalan grew up in India in the city of Bombay (now called Mumbai).
He graduated with an undergraduate degree from IIT-Bombay (whose name,
thankfully, has not changed). He received his Ph.D. from Georgia Tech in August
2006, under the guidance of Dick Lipton. Following this, he did a short stint
as a postdoctoral researcher at the University of Texas at Austin. He is
currently a postdoc at the University of Washington, visiting Princeton
University.

His research focuses on theoretical computer science, especially on algebraic
problems arising from algorithms and complexity. He also likes to dabble in
other areas such as Data-stream algorithms and Communication complexity.
Location: Seaver Science Library (SSL) - 150
Audiences: Everyone Is Invited
Contact: CS Colloquia
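
For the benign end of the problem, classical curve fitting with random noise
(as opposed to the adversarial-error regime of Reed-Muller decoding), a plain
least-squares fit already suffices; the data and degree below are invented for
illustration:

```python
# Classical curve fitting as in the opening of the abstract above:
# recover a low-degree polynomial from noisy samples by least squares.
# This is the random-noise regime, not the adversarial-error regime the
# talk addresses. Data and degree are invented.
import numpy as np

rng = np.random.default_rng(1)

xs = np.linspace(-1.0, 1.0, 200)
true_coeffs = [2.0, -1.0, 0.5]                 # 2x^2 - x + 0.5
ys = np.polyval(true_coeffs, xs) + 0.1 * rng.normal(size=xs.size)

fit = np.polyfit(xs, ys, deg=2)                # least-squares degree-2 fit
print("recovered coefficients:", np.round(fit, 2))   # ~ [2.0, -1.0, 0.5]
```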