Events Calendar



Conferences, Lectures, & Seminars
Events for May

  • CS Colloquium: John Lafferty (University of Chicago) - Statistical Learning Under Communication and Shape Constraints

    Fri, May 06, 2016 @ 11:00 AM - 12:15 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: John Lafferty, University of Chicago

    Talk Title: Statistical Learning Under Communication and Shape Constraints

    Series: Yahoo! Labs Machine Learning Seminar Series

    Abstract: Imagine that I estimate a statistical model from data, and then want to share my model with you. But we are communicating over a resource constrained channel. By sending lots of bits, I can communicate my model accurately, with little loss in statistical risk. Sending a small number of bits will incur some excess risk. What can we say about the tradeoff between statistical risk and the communication constraints? This is a type of rate distortion and constrained minimax problem, for which we provide a sharp analysis in certain nonparametric settings. We also consider the problem of estimating a high dimensional convex function, and develop a screening procedure to identify irrelevant variables. The approach adopts a two-stage quadratic programming algorithm that estimates a sum of one-dimensional convex functions, beating the curse of dimensionality that holds under smoothness constraints. Joint work with Yuancheng Zhu and Min Xu.
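
    A minimal sketch of one idea mentioned in the abstract: fitting a one-dimensional convex function by quadratic programming, with convexity enforced through non-negative second differences on a uniform grid. This is an illustrative simplification, not the speaker's two-stage algorithm; the cvxpy solver and the toy quadratic data are assumptions.

        import numpy as np
        import cvxpy as cp

        # Toy data: noisy samples of a convex function on a uniform grid.
        rng = np.random.default_rng(0)
        x = np.linspace(-2, 2, 80)
        y = x ** 2 + 0.3 * rng.standard_normal(x.size)

        # Fitted values theta_i at the grid points; convexity on a uniform grid
        # is enforced by non-negative second differences.
        theta = cp.Variable(x.size)
        objective = cp.Minimize(cp.sum_squares(y - theta))
        constraints = [theta[:-2] - 2 * theta[1:-1] + theta[2:] >= 0]
        cp.Problem(objective, constraints).solve()

        print("max deviation from true function:", float(np.max(np.abs(theta.value - x ** 2))))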

    Host: Yan Liu

    Location: Ronald Tutor Hall of Engineering (RTH) - 526

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • PhD Defense - Andrew Jones - Rendering for Automultiscopic Displays

    Wed, May 11, 2016 @ 03:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Andrew Jones, PhD Candidate

    Talk Title: Rendering for Automultiscopic Displays

    Abstract: Title: Rendering for Automultiscopic Displays

    Location: SAL 322

    Time: 3:00pm - 5:00pm, May 11th, 2016

    PhD Candidate: Andrew Jones

    Committee Members:
    Paul Debevec
    Mark Bolas (outside member)
    Jernej Barbič

    Abstract:
    While a great deal of computer generated imagery is modelled and rendered in three dimensions, the vast majority of this 3D imagery is shown on two-dimensional displays. Various forms of 3D displays have been contemplated and constructed for at least one hundred years, but only recent advances in digital capture, computation, and display have made functional and practical 3D displays possible. In this thesis, I propose several designs that overcome some of the classic limitations of 3D displays. The displays are: autostereoscopic, requiring no special viewing glasses; omnidirectional, allowing viewers to be situated anywhere around them; and multiview, producing a rendition of the 3D objects with correct horizontal parallax and vertical perspective for any viewer around the display.

    The first display prototype utilizes a spinning anisotropic mirror to distribute frames from a high-speed video projector to different viewers. Unfortunately, as the size and mass of the mirror increase, it becomes increasingly difficult to maintain a stable and rapid rotation speed. The second 3D display form has no moving mechanical parts, provides interactive content, and scales to large format displays. The key insight is that a large array of closely stacked projectors aimed at a stationary anisotropic screen is optically equivalent to a single high-speed projector aimed at a rotating anisotropic screen. Both types of display utilize new algorithms based on geometry and light field rendering. Applications for these displays include life-size interactive virtual characters, 3D teleconferencing, and time-offset conversations with 3D subjects.
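
    A toy sketch of the frame-distribution idea described above: each viewing direction is served by whichever projector (or spinning-mirror frame) points closest to the viewer. The projector count and angular span below are made-up parameters, not those of the actual prototypes.

        import numpy as np

        def projector_for_viewer(viewer_angle_deg, num_projectors=72, span_deg=180.0):
            """Pick the projector (or mirror frame) whose view direction is closest
            to the viewer's bearing -- a toy stand-in for distributing frames over
            a dense projector array (num_projectors and span_deg are hypothetical)."""
            directions = np.linspace(0.0, span_deg, num_projectors, endpoint=False)
            return int(np.argmin(np.abs(directions - (viewer_angle_deg % span_deg))))

        print(projector_for_viewer(47.3))  # index of the projector serving this viewer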


    Host: Andrew Jones

    Location: Henry Salvatori Computer Science Center (SAL) - 322

    Audiences: Everyone Is Invited

    Contact: Ryan Rozan

  • PhD Defense - Yurong Jiang - Crowd-Sourced Collaborative Sensing in Highly Mobile Environments

    Mon, May 23, 2016 @ 11:00 AM - 01:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Yurong Jiang, PhD Candidate

    Talk Title: Crowd-Sourced Collaborative Sensing in Highly Mobile Environments

    Abstract: Title: Crowd-Sourced Collaborative Sensing in Highly Mobile Environments

    Location: SAL 213

    Time: 11am-1pm, May 23rd, 2016

    PhD Candidate: Yurong Jiang

    Committee Members:
    Ramesh Govindan (chair)
    Bhaskar Krishnamachari (outside member)
    Gaurav Sukhatme


    Abstract:

    Networked sensing has revolutionized various aspects of our lives. In particular, it has allowed us to minutely quantify many aspects of our existence: what we eat, how we sleep, how we use our time, and so forth. We have seen such quantification from the smart devices we use daily, such as smartphones and wearable devices. Those smart devices usually have more than ten high-precision sensors to sense both internal and external information. Another domain likely to see such quantification in the near future is automobiles. Modern vehicles are equipped with several hundred sensors that govern the operation of internal vehicular subsystems. Those sensors from both smart devices and automobiles, coupled with online information (cloud computing, maps, traffic, etc.) and other databases as well as crowd-sourced information from other users, can enable various forms of context sensing, and can be used to design new features for both mobile devices and vehicles. We abstract those aspects of context sensing into three parts: mobile and vehicular sensing, cloud assistance, and crowdsourcing. Though each part itself comes with different challenges, accurate context sensing usually requires a careful combination of one or more of the three aspects, which brings new challenges for designing and developing context sensing systems. In this dissertation, we focus on three challenges, Programmability, Accuracy and Timeliness, in designing efficient and accurate context sensing systems for mobile devices and vehicles. We leverage mobile and vehicle sensors, cloud information, and crowdsourcing collectively to ease context sensing programming and to improve context sensing accuracy and timeliness.

    First, for Programmability, we focus on programming context descriptions using information from cloud and vehicle sensors. As more sensor-based apps are developed for vehicular platforms, we think many of these apps will be programmed using an event-based paradigm, where apps try to detect events and perform actions on detection. However, modern vehicles have several hundred sensors, and these sensors can be combined in complex ways together with cloud information to detect complicated contexts, e.g., dangerous driving. Moreover, these sensor processing algorithms may incur significant costs in acquiring sensor and cloud information. Thus, we propose a programming framework called CARLOG to simplify the task of programming these event detection algorithms. CARLOG uses Datalog to express sensor processing algorithms, but incorporates novel query optimization methods that can be used to minimize bandwidth usage, energy, or latency, without sacrificing correctness of query execution. Experimental results on a prototype show that CARLOG can reduce latency by nearly two orders of magnitude relative to an unoptimized Datalog engine.
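
    A hedged sketch of the intuition behind cost-aware event detection (not CARLOG's Datalog engine): evaluate the conjuncts of an event rule cheapest-first and short-circuit on the first failure, so expensive cloud lookups are often skipped. The predicate names, costs, and thresholds below are hypothetical.

        # Hypothetical predicates for a "dangerous driving" event; each has a rough
        # acquisition cost (seconds). Local vehicle-bus reads are cheap, the cloud
        # lookup of the posted speed limit is expensive.
        PREDICATES = [
            ("hard_braking",     0.001, lambda ctx: ctx["decel"] > 6.0),
            ("high_speed",       0.001, lambda ctx: ctx["speed"] > 30.0),
            ("over_speed_limit", 0.500, lambda ctx: ctx["speed"] > ctx["limit"]),
        ]

        def detect_event(ctx):
            """Evaluate the conjunction cheapest-first and stop at the first false
            predicate, so the expensive lookup is only paid when it can matter."""
            total_cost = 0.0
            for name, cost, pred in sorted(PREDICATES, key=lambda p: p[1]):
                total_cost += cost
                if not pred(ctx):
                    return False, total_cost
            return True, total_cost

        print(detect_event({"decel": 2.0, "speed": 35.0, "limit": 27.0}))  # cheap reject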

    Second, for Accuracy, we focus on automotive positioning accuracy. Positioning accuracy is an important factor for all kinds of context sensing applications for automobiles. Lane-level precise positioning of an automobile can improve navigation experience and on-board application context awareness. However, GPS by itself cannot provide such precision in obstructed urban environments. We propose a system called CARLOC for lane-level positioning of automobiles, which carefully incorporates the three aspects of context sensing. CARLOC uses three key ideas in concert to improve positioning accuracy: it uses digital maps to match the vehicle to known road segments; it uses vehicular sensors to obtain odometry and bearing information; and it uses crowd-sourced location estimates of roadway landmarks that can be detected by sensors available in modern vehicles. CARLOC unifies these ideas in a probabilistic position estimation framework, widely used in robotics, called the sequential Monte Carlo method. Through extensive experiments, we show that our system achieves sub-meter positioning accuracy even in obstructed environments, an order of magnitude improvement over a high-end GPS device.
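
    A minimal sketch of the sequential Monte Carlo (particle filter) idea the abstract refers to, assuming 2D positions, noisy odometry, and a range measurement to one known landmark. The noise levels and the landmark are made up; this is not CARLOC itself.

        import numpy as np

        rng = np.random.default_rng(1)

        def particle_filter_step(particles, odometry, landmark, measured_dist, noise=1.0):
            """One sequential Monte Carlo update: propagate particles with noisy
            odometry, weight them by how well they explain a measured distance to a
            known landmark (e.g., a crowd-sourced stop-sign position), and resample."""
            # Motion update: dead-reckon each particle with odometry plus noise.
            particles = particles + odometry + rng.normal(0.0, 0.5, particles.shape)
            # Measurement update: Gaussian likelihood of the observed landmark range.
            dists = np.linalg.norm(particles - landmark, axis=1)
            weights = np.exp(-0.5 * ((dists - measured_dist) / noise) ** 2)
            weights /= weights.sum()
            # Resample particles in proportion to their weights.
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx]

        particles = rng.uniform(-5, 5, size=(500, 2))      # initial position guesses (m)
        particles = particle_filter_step(particles, odometry=np.array([1.0, 0.0]),
                                         landmark=np.array([10.0, 0.0]), measured_dist=9.0)
        print("position estimate:", particles.mean(axis=0))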

    Finally, Timeliness is another important concern for context sensing applications. We consider how to ensure the timeliness and availability of media content from mobile devices. Motivated by an availability gap for visual media, where images and videos are uploaded from mobile devices well after they are generated, we explore the selective, timely retrieval of media content from a collection of mobile devices. We envision this capability being driven by similarity-based queries posed to a cloud search front-end, which in turn dynamically retrieves media objects from mobile devices that best match the respective queries within a given time limit. We design and implement a general crowdsourcing framework called MediaScope that supports various geometric queries and contains a novel retrieval algorithm to maximize the retrieval of relevant information. In experiments on a prototype, our system achieves near-optimal performance under different scenarios.
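
    An illustrative sketch of deadline-bounded, similarity-ranked retrieval in the spirit described above (not MediaScope's actual algorithm): score candidate media objects against a query feature vector and fetch greedily until the time budget runs out. The feature vectors, the fetch placeholder, and the deadline are assumptions.

        import time
        import numpy as np

        def timely_retrieve(query_vec, device_features, deadline_s=0.5):
            """Rank (device, object) candidates by cosine similarity to the query
            and fetch them in order until the deadline expires."""
            start = time.monotonic()
            candidates = [(dev, i, vec) for dev, vecs in device_features.items()
                          for i, vec in enumerate(vecs)]

            def neg_similarity(item):
                vec = item[2]
                return -np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec))

            retrieved = []
            for dev, i, _vec in sorted(candidates, key=neg_similarity):
                if time.monotonic() - start > deadline_s:
                    break
                time.sleep(0.01)            # placeholder for actually fetching the object
                retrieved.append((dev, i))
            return retrieved

        rng = np.random.default_rng(2)
        features = {f"phone{d}": rng.normal(size=(20, 8)) for d in range(3)}
        print(timely_retrieve(rng.normal(size=8), features))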


    Host: Yurong Jiang

    Location: Henry Salvatori Computer Science Center (SAL) - 213

    Audiences: Everyone Is Invited

    Contact: Ryan Rozan

  • PhD Defense - Seyed Jalal Kazemitabar Amirkolaei - "Scalable Processing of Spatial Queries"

    Mon, May 30, 2016 @ 01:30 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Seyed Jalal Kazemitabar Amirkolaei, PhD Candidate

    Talk Title: Scalable Processing of Spatial Queries

    Abstract: In recent years, geospatial data have been produced en masse, e.g., through billions of smartphones and wearable devices. The current exponential growth in data generation by mobile devices on the one hand, and the rate and complexity of recent spatial queries on the other, highlight the importance of scalable query processing techniques. Traditional database technology, which operates on centralized architectures to process persistent and less dynamic spatial objects, does not meet the requirements for scalable geospatial data processing.

    In this thesis, we specifically focus on two primary challenges in scaling spatial queries, i.e., the communication and computation costs, while guaranteeing the correctness of query results. We utilize techniques such as batch processing and parallelized frameworks to address these challenges.

    We address the location tracking cost towards achieving scalability in communication-intensive queries. The location tracking cost between the moving objects and the query processing server is a key factor in processing many moving-object continuous queries. The challenge is that increasing the number of queries and objects requires frequent location updates, which drain the battery power of mobile devices. Thus, existing approaches would not scale unless query correctness is compromised. In this thesis, we propose batch processing of spatial queries as a method to optimize the location tracking cost, scaling to large numbers of queries and objects without either compromising query correctness or using excessive battery power. In our approach, the queries are categorized into independent groups and then processed in parallel. We specifically apply our approach to the proximity detection query and optimize the communication cost while processing millions of queries.
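
    A toy sketch of the grouping idea behind batch proximity detection: bucket objects into grid cells sized by the query radius so that each cell (plus its neighbors) forms an independent group that could be handed to a parallel worker. This is an illustration only, not the thesis's algorithm.

        from collections import defaultdict
        from itertools import combinations

        def proximity_pairs(positions, radius):
            """Report pairs of objects within `radius` by testing only pairs that
            share a grid cell or a neighboring cell."""
            grid = defaultdict(list)
            for oid, (x, y) in positions.items():
                grid[(int(x // radius), int(y // radius))].append(oid)
            pairs = set()
            for (cx, cy), _members in grid.items():
                nearby = [oid for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          for oid in grid.get((cx + dx, cy + dy), [])]
                for a, b in combinations(sorted(set(nearby)), 2):
                    (ax, ay), (bx, by) = positions[a], positions[b]
                    if (ax - bx) ** 2 + (ay - by) ** 2 <= radius ** 2:
                        pairs.add((a, b))
            return pairs

        pos = {"u1": (0.0, 0.0), "u2": (3.0, 4.0), "u3": (40.0, 40.0)}
        print(proximity_pairs(pos, radius=5.0))  # {('u1', 'u2')}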

    Processing some spatial queries has become more resource-intensive in recent years. This is due to various reasons, such as the introduction of queries that are more computationally complex than the classic ones, as well as an increase in the input size (e.g., the number of GPS-enabled devices). In this thesis, we propose optimized algorithms and utilize MapReduce to process a complex spatial problem, i.e., the Multi-Criteria Optimal Location (MCOL) problem. First, we formalize it as a Maximal Reverse Skyline (MaxRSKY) query. For the first time, we present an optimized solution that scales to millions of objects over a cluster of MapReduce nodes. Specifically, rather than batch processing the query, as is typical of a MapReduce solution, we first partition the space and run a precomputation phase to identify potential regions hosting the optimal solution, and then dynamically load-balance the regions across the reducers to reduce the total execution time.
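
    A small sketch of one ingredient described above, balancing candidate regions across reducers; here a greedy longest-processing-time heuristic assigns each region to the currently least-loaded reducer. The region costs and the heuristic itself are assumptions, not the thesis's actual scheme.

        import heapq

        def assign_regions(region_costs, num_reducers):
            """Assign regions to reducers greedily, largest estimated cost first,
            always to the least-loaded reducer (longest-processing-time heuristic)."""
            heap = [(0.0, r, []) for r in range(num_reducers)]   # (load, id, regions)
            heapq.heapify(heap)
            for region, cost in sorted(region_costs.items(), key=lambda kv: -kv[1]):
                load, rid, regions = heapq.heappop(heap)
                regions.append(region)
                heapq.heappush(heap, (load + cost, rid, regions))
            return {rid: (load, regions) for load, rid, regions in heap}

        costs = {"R1": 9.0, "R2": 7.0, "R3": 4.0, "R4": 3.0, "R5": 1.0}
        print(assign_regions(costs, num_reducers=2))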


    Host: Seyed Jalal Kazemitabar Amirkolaei

    Location: Henry Salvatori Computer Science Center (SAL) - 213

    Audiences: Everyone Is Invited

    Contact: Ryan Rozan
