
Events Calendar









Receptions & Special Events
Events for April

  • USC Robotics Open House

    Thu, Apr 11, 2013 @ 10:00 AM - 04:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Robotics faculty, postdocs, and students are proud to host the annual USC Robotics Open House on April 11, 2013 (10 am - 4 pm).

    The open house will be held on the 4th floor of RTH and in HNB room 10. Please visit and learn what is new in robotics@usc.

    The Center for Robotics and Embedded Systems (CRES) was established in fall 2002. It is an interdisciplinary organized research unit (ORU) in the USC Viterbi School of Engineering that focuses on the science and technology of effective, robust, and scalable robotic systems, with broad and far-reaching applications. CRES facilitates interdisciplinary interaction and collaboration through its robotics faculty and its large team of interdisciplinary affiliates, and serves as a linchpin for strategic research areas at USC. CRES projects span the areas of service, humanoid, distributed, reconfigurable, space, and nano robotics and impact a broad spectrum of applications, including assistance, training and rehabilitation, education, environmental monitoring and cleanup, emergency response, homeland security, and entertainment. The Center provides a tight-knit foundation for collaboration and opportunities for education and outreach.

    CRES welcomes participation and new members. For information on how to get involved, look here.

    For more information, please email cres@robotics.usc.edu.

    The leadership of CRES consists of:

    Maja J Matarić, Founding Director
    Ari Requicha, Associate Director
    Stefan Schaal, Associate Director
    Wei-Min Shen, Associate Director
    Gaurav Sukhatme, Associate Director

    More Information: 2013_02_12_Open_house_ad.pdf

    Location: Ronald Tutor Hall of Engineering (RTH) - 4th Floor

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • PhD Defense - Na Chen

    Mon, Apr 15, 2013 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events



    PhD Candidate: Na Chen

    Committee members:
    Viktor K. Prasanna (chair)
    Dennis McLeod
    Raghu Raghavendra

    Time: April 15 1pm-3pm
    Location: EEB110

    Title: Understanding Semantic Relationships between Data Objects

    Abstract:
    Semantic Web technologies are a standard, non-proprietary set of languages and tools that enable modeling, sharing, and reasoning about information. Words, terms, and entities on the Semantic Web are connected through meaningful relationships, and thus enable a graph representation of knowledge with rich semantics (also known as an ontology). Understanding the semantic relationships between data objects is a critical step towards obtaining useful semantic information for better integration, search, and decision-making. This thesis addresses the problem of semantic relationship understanding from two aspects: first, given an ontology schema, an automatic method is proposed to understand the semantic relationships between image objects using the schema as a semantic source; second, given a large ontology with both schema and instances, a learning-to-rank based ranking system is developed to identify the semantic relationships in the ontology that are most relevant to user preferences.

    The first part of this thesis presents an automatic method for understanding and interpreting the semantics of unannotated web images. We observe that the relations between objects in an image carry important semantics about the image. To capture and describe such semantics, we propose the Object Relation Network (ORN), a graph model representing the most probable meaning of the objects and their relations in an image. Guided and constrained by an ontology, ORN transfers the rich semantics in the ontology to image objects and the relations between them, while maintaining semantic consistency (e.g., a soccer player can kick a soccer ball, but cannot ride it). We present an automatic system which takes a raw image as input and creates an ORN based on the image's visual appearance and the guide ontology. Our system is evaluated on a dataset containing over 26,000 web images. We demonstrate various useful web applications enabled by ORNs, such as automatic image tagging, automatic image description generation, image search by image, and semantic image clustering.

    In the second part of this thesis, a learning-to-rank based ranking system is proposed for mining complex relationships on the Semantic Web. Our objective is to provide an effective ranking method for complex relationship mining that can 1) automatically personalize ranking results according to user preferences, 2) be continuously improved to capture user preferences more precisely, and 3) hide as many technical details from end users as possible. We observe that a user's opinions on search results carry important information about his interests and search intentions. Based on this observation, our system lets each user give simple feedback about the current search results, and employs a machine-learning based ranking algorithm to learn the user's preferences from that feedback. A personalized ranking function is then generated and used to sort the results of each of the user's subsequent queries. The user can keep teaching the system his preferences by giving feedback over several iterations until he is satisfied with the search results. Our system is evaluated on a large RDF knowledge base created from Freebase linked open data. The experimental results demonstrate the effectiveness of our method compared with the state of the art.

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 110

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Jan Prokaj

    Fri, Apr 19, 2013 @ 04:00 PM - 05:30 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Ph.D. candidate: Jan Prokaj
    Time: 4:00pm to 5:30pm, April 19, 2013
    Location: GFS104
    Committee:
    Gerard Medioni (chairman)
    Ramakant Nevatia
    Shrikanth Narayanan
    Title: Exploitation of Wide Area Motion Imagery

    Abstract:

    Digital photography solutions now routinely allow the capture of tens of megapixels of data at 2 frames per second. At these resolutions, a geographic area covering a whole city can be captured at once from an unmanned aerial vehicle (UAV), while still allowing the recognition of vehicles and people (for sensors under development). This fact, in tandem with the availability of increased computational power, has led to the growth of wide area motion imagery (WAMI).

    Our objective is to develop algorithms that automatically process the imagery of interest and turn it into a more useful, informative form. This more informative form can exist at different levels of semantics, from low-level to high-level. Therefore, the algorithms we propose operate across a range from low-level to high-level processing.

    WAMI data is often captured by an array of cameras. Therefore, at the lowest level, we need an algorithm that takes an array of individual camera images and estimates a high quality mosaic. We propose a piecewise affine model to handle all image deformations that deviate from the standard pinhole camera model.

    The next level of processing involves estimating the trajectories of all moving objects, or "tracking." We propose a tracking algorithm that optimally infers short tracks (tracklets) using Bayesian networks. These tracklets are then integrated into a multi-object tracking algorithm that achieves good performance on aerial surveillance video. When coupled with a regression-based tracker, stopping targets can be handled.

    WAMI is often collected over urban areas, where tall buildings and other structures cause severe occlusion that in turn causes significant track fragmentation. To solve this problem, we propose a method that links fragmented tracks using known 3D scene structure.

    In order to enable large-scale semantic analysis of WAMI data, higher-level algorithms that determine at least some of the semantics are necessary. We propose a framework based on the Entity Relationship Model that is able to recognize a large variety of activities on real data as well as on GPS tracks.

    When very high resolution data are available, such as from high-definition cameras on the ground, we want to infer even more semantics from video data. Under these circumstances, we propose an algorithm for vehicle classification that works with arbitrary vehicle pose.

    Location: Grace Ford Salvatori Hall Of Letters, Arts & Sciences (GFS) - 104

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon
