University Calendar
Events for May
-
New York Is Like Johannesburg: Comparative Imaginations of South Africa and the U.S. A Concert and Conversation featuring Jean Grae
Thu, May 01, 2014 @ 07:00 PM - 09:00 PM
USC Viterbi School of Engineering
University Calendar
RSVP TO: http://web-app.usc.edu/ws/eo2/calendar/113/event/903812
In 1994, Nelson Mandela became the first president of South Africa elected under universal suffrage. The end of the violent segregationist policy of apartheid two years prior had launched an era of new political possibilities. In celebration of the twenty-year anniversary of Mandela's election, a panel and concert will consider the current condition of both South Africa and the United States, highlighting ongoing global struggles for an end to police abuse and labor suppression. Hip hop emcee Jean Grae will join Mazibuko K. Jara, journalist-activist and founder of South Africa's Amandla magazine; Brian Ashley of the Alternative Information Development Centre in Cape Town and co-founder and editor of Amandla; and U.S. historians Robin D. G. Kelley and Johanna Fernandez to discuss contemporary cultural and political conditions shared between two nations whose struggles for civil and human rights modeled the ambitions of a majority world.
About the Participants:
Spawned from two super musically gifted parents, Jean Grae's powers manifested at an early age. She studied at the LaGuardia Performing Arts School before majoring in music business at New York University. Feeling enveloped by mainstream mediocrity, she went out in search of others with abilities like hers, first under the moniker "What? What?" as a member of the indie group Natural Resource, providing classic singles such as "Baseball" and "Bum Deal," then with her solo efforts: Attack of the Attacking Things, This Week, The Bootleg of the Bootleg, the 9th Wonder-produced Jeanius and, most recently, the mixtape Cookies or Comas. Grae has been featured on tracks with Pharoahe Monch, Talib Kweli, The Roots, Wale, Lil B the BasedGod, Phonte, Joell Ortiz and a long list of others. Also a producer, writer and director, Grae is currently at work on a sitcom entitled Life with Jeannie. (Facebook, Twitter)
Brian Ashley is a staff member with the Alternative Information Development Centre in Cape Town and co-founder and editor of Amandla, a bimonthly South African magazine founded in 2006.
Johanna Fernandez is a native New Yorker and assistant professor of history at Baruch College at the City University of New York. She teaches twentieth-century U.S. history, the history of social movements, the political economy of American cities and African American history. Her forthcoming book, tentatively entitled When the World Was Their Stage: A History of the Young Lords Party, 1968–1974, discusses the Young Lords Party, the Puerto Rican counterpart to the Black Panther Party.
Mazibuko K. Jara, M. Phil., is a founder of Amandla magazine, Executive Director of Ntinga Ntaba kaNdoda (a community-owned development institution in South Africa), and a founder of the Treatment Action Campaign (HIV/AIDS treatment group) and the National Coalition for Gay and Lesbian Equality, which fought for and won the inclusion of sexual orientation as a ground for non-discrimination in the 1996 Constitution of South Africa. He is also a research associate with the University of Cape Town's Centre for Law and Society. He was previously Deputy National Secretary of Young Communist League, and later, the national spokesperson and chief strategist of the South Africa Communist Party from February 2000 to April 2005. He is currently active in the political organizations, Democratic Left Front and Democracy from Below.
Robin D. G. Kelley is the Gary B. Nash Professor of American History at UCLA. Kelley's research and teaching interests range widely, covering the history of labor and radical movements in the U.S. and the African diaspora, intellectual and cultural history (particularly music and visual culture), urban studies and transnational movements. His books include the prize-winning Thelonious Monk: The Life and Times of an American Original; Africa Speaks, America Answers: Modern Jazz in Revolutionary Times; Hammer and Hoe: Alabama Communists During the Great Depression; Race Rebels: Culture, Politics and the Black Working Class; and Yo' Mama's DisFunktional!: Fighting the Culture Wars in Urban America.
Shana L. Redmond is assistant professor of American studies and ethnicity at USC. She is the recipient of numerous awards and fellowships and the author of the book Anthem: Social Movements and the Sound of Solidarity in the African Diaspora, which examines the sonic politics performed amongst and between organized Afro-diasporic publics in the twentieth century.
Organized by Shana L. Redmond (American Studies and Ethnicity).
For further information on this event:
visionsandvoices@usc.edu
Location: Ronald Tutor Campus Center (TCC) - Grand Ballroom, Ronald Tutor Campus Center
Audiences: Everyone Is Invited
Contact: Visions and Voices
-
PhD Defense - Prithviraj Banerjee
Fri, May 02, 2014 @ 10:30 AM - 12:30 PM
Thomas Lord Department of Computer Science
University Calendar
Ph.D. Candidate: Prithviraj Banerjee
Title: Incorporating Aggregate Feature Statistics in Structured Dynamical Models for Human Activity Recognition
Date: Friday, May 2nd, 2014
Time: 10:30AM
Location: PHE 223
Committee:
Ram Nevatia (Chair)
Gerard Medioni
C. -C. Jay Kuo (outside member)
Abstract:
Human action recognition in videos is a central problem of computer vision, with numerous applications in the fields of video surveillance, data mining and human-computer interaction. There has been considerable research in classifying pre-segmented videos into a single activity class; however, there has been comparatively less progress on activity detection in un-segmented and un-aligned videos containing medium- to long-term complex events. Our objective is to develop efficient algorithms to recognize human activities in monocular videos captured from static cameras in both indoor and outdoor scenarios. Our focus is on detection and classification of complex human events in un-segmented continuous videos, where the top-level event is composed of primitive action components, such as human key-poses or primitive actions. We assume a weakly supervised setting, where only the top-level event labels are provided for each video during training, and the primitive action components are not labeled.
We require our algorithm to be robust to missing frames, temporary occlusion of body parts, background clutter, and to variations in activity styles and durations. Furthermore, our models gracefully scale to complex events containing human-human and human-object interactions, while not assuming access to perfect pedestrian or object detection results.
We have proposed and adopted the design philosophy of combining global statistics of local spatio-temporal features with the high-level structure and constraints provided by dynamic probabilistic graphical models. We present four different algorithms for activity recognition, spanning the feature-classifier hierarchy in terms of their semantic and structure modeling capability. Firstly, we present a novel latent CRF classifier for modeling the local neighborhood structure of spatio-temporal interest point features in terms of code-word co-occurrence statistics, which captures the local temporal dynamics present in the action. In our second work, we present a multiple kernel learning framework to combine human pose estimates generated from a collection of kinematic tree priors, spanning the range of expected pose dynamics in human actions. In our third work, we present a latent CRF model for automatically identifying and inferring the temporal location of key-poses of an activity, and show results on detecting multiple instances of actions in continuous un-segmented videos. Lastly, we propose a novel dynamic multi-state feature pooling algorithm which identifies the discriminative segments of a video and is robust to arbitrary gaps between state transitions as well as to significant variations in state durations. We evaluate our models on short-, medium- and long-term activity datasets, and show state-of-the-art performance on classification, detection and video streaming tasks.
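The multi-state pooling idea above can be illustrated with a minimal sketch: features are pooled separately within each latent state, so gaps between state transitions and differing state durations do not distort the representation. The per-state max-pooling and the function name below are simplified stand-ins for the thesis's latent-state formulation, not its actual implementation.

```python
import numpy as np

def multi_state_pool(frame_features, frame_states, n_states):
    """Max-pool frame features separately per latent state, then concatenate.
    Robust to gaps between states and to variable state durations, because
    pooling ignores where and for how long a state occurs."""
    feats = np.asarray(frame_features, dtype=float)
    states = np.asarray(frame_states)
    pooled = []
    for s in range(n_states):
        mask = states == s
        pooled.append(feats[mask].max(axis=0) if mask.any()
                      else np.zeros(feats.shape[1]))
    return np.concatenate(pooled)
```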
Location: Charles Lee Powell Hall (PHE) - 223
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Pramod Sharma
Fri, May 02, 2014 @ 03:45 PM - 05:45 PM
Thomas Lord Department of Computer Science
University Calendar
PhD Candidate: Pramod Sharma
Title: Effective Incremental Learning and Detector Adaptation Methods for Video Object Detection
Date: Friday, May 2nd, 2014
Time: 3:45 PM
Location: EEB 248
Committee:
Ram Nevatia (chair)
Gerard Medioni
C. -C. Jay Kuo (outside member)
Abstract:
Object detection is a challenging problem in computer vision. With the increasing use of social media, smartphones and modern digital cameras, thousands of videos are uploaded to the Internet every day. Object detection is critical for analyzing these videos for many tasks such as summarization, description, scene analysis, tracking or activity recognition.
Typically, an object detector is trained in an offline manner by collecting thousands of positive and negative training samples. However, due to large variations in appearance, pose, illumination, background scene and similarity to other objects, it is very difficult to train a generalized object detector that gives high performance across different test videos. We address this problem by proposing detector adaptation methods which collect online samples from a given test video and train an adaptive/incremental classifier on this training data in order to achieve high performance.
First, we propose an efficient incremental learning method for a cascade of boosted classifiers, which collects training data in a supervised manner and adjusts the parameters of the offline trained cascade by combining an online loss with the offline loss. Then, we propose an unsupervised incremental learning approach which collects online samples automatically from a given test video using tracking information. However, online samples collected in an unsupervised manner are prone to labeling errors; hence, instead of assigning hard labels to online samples, we utilize a Multiple Instance Learning (MIL) approach and assign labels to bags of instances rather than to individual samples. We propose an MIL loss function for the Real Adaboost framework to train our incremental detector.
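As a hedged illustration of the bag-labeling idea, the sketch below scores a bag with a noisy-OR model (a bag is positive if any instance is positive), one common way to define an MIL loss. This is not the thesis's exact Real Adaboost formulation, and the function names are invented.

```python
import numpy as np

def bag_probability(instance_scores):
    """Noisy-OR bag model: P(bag positive) = 1 - prod(1 - p_i),
    where p_i is a sigmoid over each instance's classifier score."""
    p_inst = 1.0 / (1.0 + np.exp(-2.0 * np.asarray(instance_scores)))
    return 1.0 - np.prod(1.0 - p_inst)

def mil_log_loss(bags, bag_labels):
    """Negative log-likelihood computed over bags, not individual samples,
    so a single mislabeled instance inside a bag is tolerated."""
    loss = 0.0
    for scores, y in zip(bags, bag_labels):
        p = bag_probability(scores)
        loss += -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return loss / len(bags)
```

A positive bag needs only one strongly positive instance to incur low loss, which is exactly what makes MIL robust to the noisy labels produced by tracking.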
While the above approach gives good performance, it is limited to Real Adaboost-based offline trained detectors. We propose an efficient detector adaptation method which works with various kinds of offline trained detectors. In this approach, we first apply the offline trained detector at a high threshold to obtain confident detection responses. These detection responses are tracked using a tracking-by-detection method, and online samples are collected from the obtained detection responses and tracking output. However, positive online samples can have different articulations and pose variations, so they are divided into different categories using a pose classifier trained offline. We train a multi-class random fern adaptive classifier using the collected online samples. During the testing stage, we first apply the offline trained detector at a low threshold, then apply the adaptive classifier to the obtained detection responses; it either accepts a detection response as a true response or rejects it as a false alarm. In this manner, we focus on improving the precision of the offline trained detector.
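A random fern of the kind used in this adaptation step can be sketched minimally: a fern is a fixed set of random binary feature comparisons whose outcomes index into a per-leaf class histogram, which is updated online. This is the standard construction, not the author's exact multi-class boosted variant, and the class name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomFern:
    """One fern: k random binary tests (feature-pair comparisons) whose
    outcomes form a k-bit index into a 2^k leaf of class counts."""
    def __init__(self, n_tests, n_features, n_classes):
        self.pairs = rng.integers(0, n_features, size=(n_tests, 2))
        self.counts = np.ones((2 ** n_tests, n_classes))  # Laplace prior

    def leaf(self, x):
        idx = 0
        for a, b in self.pairs:
            idx = (idx << 1) | int(x[a] > x[b])
        return idx

    def update(self, x, y):
        """Online learning: bump the count of class y at x's leaf."""
        self.counts[self.leaf(x), y] += 1

    def posterior(self, x):
        row = self.counts[self.leaf(x)]
        return row / row.sum()
```

Because updating a fern is a single counter increment, it is cheap enough to train from online samples collected during tracking.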
We extend this approach by proposing a multi-class boosted random fern adaptive classifier in order to select discriminative random ferns for high detection performance. We further incorporate MIL into the boosted random fern framework and propose a boosted multi-instance random fern adaptive classifier. Boosting provides discriminability to the adaptive classifier, whereas MIL provides robustness to noisy and ambiguous training samples. We demonstrate the effectiveness of our approaches by evaluating them on several public datasets for the problem of human detection.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Borom Tunwattanapong
Mon, May 05, 2014 @ 10:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Spherical Harmonic and Point Illumination Basis for Reflectometry and Relighting
PhD Candidate: Borom Tunwattanapong
Time: Mon, May 5, 2014 @ 10:00 AM - 12:00 PM
Location: SAL 322
Committee:
Paul Debevec (chair)
Abhijeet Ghosh (Imperial College London)
Ulrich Neumann
Andreas Kratky (Cinematic Arts, outside member)
Abstract:
Digitally recording realistic models of real-world objects is a longstanding problem in computer graphics and vision, with countless applications in cultural heritage preservation, industrial design, visual effects, on-line commerce and interactive entertainment. The main goal is acquiring digital models which can be used to render how the object would look from any viewpoint, reflecting the light of any environment, allowing the digital model to represent the object faithfully in a virtual world.
This dissertation presents a system for acquiring spatially-varying reflectance information and relighting various surface types by observing objects under active basis illumination. Most types of real-world objects are illuminated with a succession of spherical harmonic illumination conditions. From the object's response to the harmonics, we can separate diffuse and specular reflections, estimate world-space diffuse and specular normals, and compute anisotropic roughness parameters for each view of the object.
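To make the harmonic-response idea concrete, here is a simplified sketch of recovering a diffuse (Lambertian) surface normal from a pixel's responses to the four lowest-order real spherical harmonic lighting conditions: the order-1 responses are proportional to the normal's components, so normalizing them cancels the unknown albedo. The ordering convention and function name are assumptions for illustration; the full system estimates much more (specular normals, anisotropic roughness).

```python
import numpy as np

def diffuse_normal_from_sh(responses):
    """responses: pixel intensities under SH lighting [Y00, Y1-1, Y10, Y11].
    For a Lambertian pixel the order-1 responses scale with (n_y, n_z, n_x);
    normalizing removes the albedo scale factor."""
    _, y1m1, y10, y11 = responses
    n = np.array([y11, y1m1, y10])   # reorder to (n_x, n_y, n_z)
    return n / np.linalg.norm(n)
```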
For objects with complicated reflectance or geometry, this work proposes a system that practically acquires relightable and editable models of the objects. The system employs a combination of spherical harmonics and local illumination which reduces the number of required photographs by an order of magnitude compared to traditional techniques.
Additionally, for faces, this work proposes a novel technique to rapidly capture and estimate reflectance properties using an array of cameras and flashes. The reflectance properties can also be used to reconstruct the complete 3-D models of the face.
Location: Henry Salvatori Computer Science Center (SAL) - 322
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense- Bei (Penny) Pan
Wed, May 07, 2014 @ 10:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
University Calendar
PhD Candidate: Bei (Penny) Pan
Title:
Utilizing Real-World Traffic Data to Forecast the Impact by Traffic Incidents
Committee:
Cyrus Shahabi (chair)
Craig Knoblock
Genevieve Giuliano (outside member)
Abstract:
For the first time, real-time high-fidelity spatiotemporal data on the transportation networks of major cities have become available. This gold mine of data can be used to learn about the behavior of traffic congestion at different times and locations, potentially resulting in major savings in time and fuel, two important commodities of the 21st century. How to mine valuable information from this data to enable next-generation technologies of unprecedented convenience has therefore become a key topic in spatiotemporal data mining. Utilizing real-world transportation-related datasets, this thesis focuses on problems related to the impact of traffic incidents. Traffic incidents are non-recurring events on the road network, such as traffic accidents, weather hazards, special events and construction zone closures, which contribute to approximately 50% of traffic congestion.
First, this thesis addresses the fundamental problem of traffic prediction in the presence of traffic incidents by utilizing traffic sensor data and incident reports collected on Los Angeles road networks. The proposed prediction method overcomes the deficiency of traditional time-series prediction techniques by considering the unique characteristics of traffic speed time series. Then, using the same dataset, this thesis proposes a set of methods to predict the dynamic evolution of incident impact. From the traffic data surrounding traffic incidents, this thesis models the propagation behavior of congestion caused by archived incidents, and develops a set of clustering-based techniques for predicting similar behavior in the future. Third, besides sensor data, this thesis also mines social media and GPS trajectories for a better understanding of the causes of traffic incidents. Specifically, by identifying unusual traveling behaviors and Twitter-like posts in data collected in Beijing, this work detects and analyzes the impact of traffic incidents. Finally, this thesis analyzes the causal relationship between freeway traffic and arterial traffic to provide a comprehensive prediction of incidents' impact on both freeways and arterial streets. As a result, next-generation navigation applications built on the approaches discussed in this thesis can help drivers effectively avoid impacted areas in real time and thereby save them a considerable amount of travel time.
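A minimal sketch of the archived-incident idea: match a new incident's early congestion profile against archived incidents and reuse the historical impact as the forecast. Nearest-neighbour matching and all names below are illustrative stand-ins for the thesis's clustering-based techniques.

```python
import numpy as np

def predict_impact(current_profile, archived_profiles, archived_impacts):
    """Forecast a new incident's impact by finding the archived incident
    whose early congestion profile (e.g. speed drops over time) is closest,
    then reusing that incident's observed impact."""
    distances = [np.linalg.norm(np.asarray(current_profile) - np.asarray(p))
                 for p in archived_profiles]
    return archived_impacts[int(np.argmin(distances))]
```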
Location: Ronald Tutor Hall of Engineering (RTH) - 306
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Megha Gupta
Wed, May 07, 2014 @ 10:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
University Calendar
Ph.D. Candidate: Megha Gupta
Committee members:
Gaurav Sukhatme (chair)
Stefan Schaal
Bhaskar Krishnamachari (outside member)
Time: May 7, 2014, 10 am
Location: Ronald Tutor Hall (RTH), room 422
Title: Intelligent Robotic Manipulation of Cluttered Environments
Abstract:
Robotic household assistants of the future will need to understand their environment in real time with high accuracy. Two problems make this challenging for robots. First, human environments are typically cluttered, containing many objects of all kinds, shapes and sizes in close proximity. This introduces errors into the robot's perception and manipulation. Second, human environments are highly varied. Improving a robot's perceptual abilities can tackle these challenges only partially. A robot's ability to manipulate its environment can help overcome the limits of perception.
We test this idea in the context of sorting and searching in cluttered, bounded, and partially observable environments. The inherent uncertainty in the world state forces the robot to adopt an observe-plan-act strategy in which perception, planning, and execution are interleaved. Since executing an action may reveal previously unknown information about the world, a new plan must be generated as a consequence of the robot's actions. Since manipulation is typically expensive on a robot, our goal is to reduce the number of object manipulations required to complete the desired task.
We present a robust pipeline that combines manipulation-aided perception and grasping in the context of sorting objects on a tabletop. We present an adaptive look-ahead algorithm for exploring an environment by prehensile and non-prehensile manipulation of the objects it contains. Finally, we add contextual structure to the world in the form of object-object co-occurrence relations and present an algorithm that uses context to guide object search. We evaluate our planners through simulations and real-world experiments on the PR2 robot and show that purposeful manipulation of clutter to aid perception becomes increasingly useful (and essential) as the clutter in the environment increases.
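The observe-plan-act loop described above can be sketched with a toy clutter simulator in which only the top object of a pile is visible and each manipulation reveals the object beneath, forcing a replan after every action. All class and function names here are hypothetical illustrations, not the thesis's software.

```python
class ToyClutterRobot:
    """Toy world: a pile of stacked objects; only the top one is observable.
    Moving the top object aside reveals the one beneath it."""
    def __init__(self, pile):
        self.pile = list(pile)     # ordered bottom ... top
        self.moved = []

    def observe(self):             # partial, occlusion-limited perception
        return {"visible": self.pile[-1] if self.pile else None,
                "moved": list(self.moved)}

    def plan(self, world):         # trivial planner: move what we can see
        return ("move", world["visible"])

    def act(self, action):         # manipulation may uncover new objects
        self.moved.append(self.pile.pop())

def observe_plan_act(robot, task_done, max_steps=50):
    """Interleave perception, planning and execution, replanning after
    every action because each action can reveal new information."""
    for _ in range(max_steps):
        world = robot.observe()
        if task_done(world):
            return world
        robot.act(robot.plan(world))
    return robot.observe()
```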
Location: Ronald Tutor Hall of Engineering (RTH) - 422
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Paul Graham
Thu, May 08, 2014 @ 10:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title:
A Framework for High-Resolution, High-Fidelity, Inexpensive Facial Scanning
PhD Candidate: Paul Graham
Committee:
Paul Debevec (chair)
Gerard Medioni
Michelle Povinelli (outside member)
Hao Li
Abhijeet Ghosh
Abstract:
We present a framework for high-resolution, high-fidelity, inexpensive facial scanning. The framework combines the speed and cost of passive-lighting scanning systems with the fidelity of active-lighting systems. The subject is first scanned at the mesoscale, the scale of pores and fine wrinkles. The process is a near-instant method for acquiring facial geometry and reflectance with 24 DSLR cameras and ten flashes. The flashes are fired in rapid succession with subsets of the cameras, which are specially arranged to produce an even distribution of specular highlights on the face. The total capture time is shorter than the duration of the eyelid's mechanical movement in the human blink reflex. We use this set of acquired images to estimate diffuse color, specular intensity, and surface orientation at each point on the face. With a single photo per camera, we optimize the facial geometry to maximize the consistency of diffuse reflection and minimize the variance of specular highlights using an energy-minimization message-passing technique. This allows the final sub-millimeter surface detail to be obtained via shape-from-specularity, even though every photo is from a different viewpoint. The final system uses commodity components and produces models suitable for generating high-quality digital human characters.
The mesostructure is enhanced to include microgeometry through the scanning of skin patches around the face. We digitize the exemplar patches with a polarization-based computational illumination technique which considers specular reflection and single scattering. The recorded microstructure patches can be used to synthesize full-facial microstructure detail for either the same subject or a different subject with a similar skin type. We show that the technique allows for greater realism in facial renderings, including a more accurate reproduction of the skin's specular reflection effects.
A microstructure database is provided for easy cross-subject synthesis during the enhancement stage. Additionally, a multi-view camera calibration technique is introduced. This new technique can be accomplished with a single view from each camera of a cylinder wrapped in a checkerboard pattern. It is fast and resolves extrinsic and intrinsic camera parameters to a sub-pixel re-projection error.
Location: Henry Salvatori Computer Science Center (SAL) - 322
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Juan P. Fasola
Thu, May 08, 2014 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
University Calendar
PhD Candidate: Juan P. Fasola
Title:
Socially Assistive and Service Robotics for Older Adults:
Methodologies for Motivating Exercise and Following Spatial Language Instructions in Discourse
Committee:
Maja J Mataric' (chair)
Gaurav S. Sukhatme
Aaron Hagedorn (outside member)
Abstract:
The growing population of aging adults is increasing the demand for healthcare services worldwide. Socially assistive robotics (SAR) and service robotics have the potential to aid in addressing the needs of the growing elderly population by promoting health benefits, independent living, and improved quality of life. For such robots to become ubiquitous in real-world human environments, they will need to interact with and learn from non-expert users in a manner that is both natural and practical for the users. In particular, such robots will need to be capable of understanding natural language instructions in order to learn new tasks and receive guidance and feedback on task execution.
Research into SAR and service robotics-based solutions for non-expert users, and in particular older adults, that spans varied assistive tasks generally falls within one of two distinct areas: 1) robot-guided interaction, and 2) user-guided interaction. This dissertation contributes to both of these research areas.
To address robot-guided interaction, this dissertation presents the design methodology, implementation and evaluation details of a novel SAR approach to motivate and engage elderly users in simple physical exercise. The approach incorporates insights from psychology research into intrinsic motivation and contributes five clear design principles for SAR-based therapeutic interventions. To evaluate the approach and its effectiveness in gaining user acceptance and motivating physical exercise, it was implemented as an integrated system and three user studies were conducted with older adults, to investigate: 1) the effect of praise and relational discourse in the system towards increasing user motivation; 2) the role of user autonomy and choice within the interaction; and 3) the effect of embodiment in the system by comparing user evaluations of similar physically and virtually embodied SAR exercise coaches in addition to evaluating the overall SAR system.
To address user-guided interactions, specifically with non-expert users through the use of natural language instructions, this dissertation presents a novel methodology that allows service robots to interpret and follow spatial language instructions, with and without user-specified natural language constraints and/or unvoiced pragmatic constraints. This work contributes a general computational framework for the representation of dynamic spatial relations, with both local and global properties. The methodology also contributes a probabilistic approach in the inference of instruction semantics; a general approach for interpreting object pick-and-place tasks; and a novel probabilistic algorithm for the automatic extraction of contextually and semantically valid instruction sequences from unconstrained spatial language discourse, including those containing anaphoric reference expressions. The spatial language interpretation methodology was evaluated in simulation, on two different physical robot platforms, and in a user study conducted with older adults for validation with target users.
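As a rough illustration of probabilistically grounding a spatial relation (the actual framework handles dynamic relations, constraints and discourse), the sketch below normalizes soft relation scores into a distribution over candidate referents and picks the most probable one. The scoring function and all names are invented stand-ins.

```python
import math

def left_of_score(obj_xy, landmark_xy):
    """Soft semantics for 'left of': nonzero only when the object lies to
    the landmark's left (smaller x), decaying with vertical misalignment."""
    dx = landmark_xy[0] - obj_xy[0]
    dy = obj_xy[1] - landmark_xy[1]
    if dx <= 0:
        return 0.0
    return math.exp(-(dy ** 2))

def ground_referent(candidates, landmark_xy, relation=left_of_score):
    """Turn relation scores into a probability distribution over candidate
    objects and return the most probable referent."""
    scores = {name: relation(xy, landmark_xy) for name, xy in candidates.items()}
    total = sum(scores.values()) or 1.0
    probs = {name: s / total for name, s in scores.items()}
    return max(probs, key=probs.get), probs
```

Interpreting an instruction like "pick up the cup to the left of the book" then reduces to grounding the referent against the landmark's position before planning the pick-and-place.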
Location: Ronald Tutor Hall of Engineering (RTH) - 406
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Chung-Cheng Chiu
Tue, May 13, 2014 @ 12:00 PM - 02:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Generating Gestures from Speech for Virtual Humans Using Machine Learning Approaches
PhD Candidate: Chung-Cheng Chiu
Committee:
Stacy Marsella (Chair)
Jonathan Gratch
Louis-Philippe Morency
Ulrich Neumann
Stephen Read (outside member)
Time: 12pm
Location: EEB 248
Abstract:
There is a growing demand for animated characters capable of simulating face-to-face interaction using the same verbal and nonverbal behavior that people use. For example, research in virtual human technology seeks to create autonomous characters capable of interacting with humans using spoken dialog. Further, as video games have moved beyond first-person shooters, gameplay tends to comprise more and more social interaction, where virtual characters interact with each other and with the player's avatar. Common to these applications, the autonomous characters are expected to exhibit behaviors resembling those of a real human.
The focus of this work is generating realistic gestures for virtual characters, specifically the coverbal gestures that are performed in close relation to the content and timing of speech. A conventional approach for animating gestures is to construct gesture animations for each utterance the character speaks, by handcrafting animations or using motion capture techniques. The problem with this approach is that it is costly in time and money and is not even feasible for characters designed to generate novel utterances on the fly.
This thesis addresses using machine learning approaches to learn, from human conversational data, a data-driven gesture generator that can generate behavior for novel utterances and therefore saves development effort. This work assumes that learning to generate gestures from speech is a feasible task. The framework exploits a classification scheme over gestures to provide domain knowledge about gestures and help the machine learning models realize the generation of gestures from speech. The framework is composed of two components: one realizes the relation between speech and gesture classes, and the other performs gesture generation based on the gesture classes. To facilitate the training process, this research collected real-world conversation data involving dyadic interviews and a set of motion capture data of human gesturing while speaking. The evaluation experiments assess the effectiveness of each component by comparing it with state-of-the-art approaches and evaluate the overall performance through studies involving human subjective evaluations. An alternative machine learning framework was also proposed for comparison with the framework addressed in this thesis. The evaluation experiments show that the framework outperforms state-of-the-art approaches.
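The two-component decomposition (speech features → gesture class → motion) can be sketched minimally as below. The thresholds, feature names and motion library are invented placeholders for the learned models in the thesis; the point is only the pipeline shape, where domain knowledge about gesture classes sits between speech analysis and motion generation.

```python
# Hypothetical motion library keyed by gesture class.
GESTURE_MOTIONS = {
    "beat": "short_up_down",       # rhythmic emphasis gesture
    "deictic": "point_forward",    # pointing/reference gesture
    "rest": "hands_at_sides",
}

def classify_gesture(speech_features):
    """Stand-in for the learned speech-to-gesture-class component."""
    if speech_features["energy"] > 0.7:
        return "beat"
    if speech_features["is_reference"]:
        return "deictic"
    return "rest"

def generate_gesture(speech_features):
    """Component 2: map the predicted gesture class to a motion segment."""
    cls = classify_gesture(speech_features)
    return cls, GESTURE_MOTIONS[cls]
```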
The central contribution of this research is a machine learning framework that is capable of learning to generate gestures from conversation data collected from different individuals while preserving the motion style of specific speakers. In addition, our framework allows the incorporation of data recorded through other media and can thereby significantly enrich the training data. The resulting model provides an automatic approach for deriving a gesture generator which realizes the relation between speech and gestures. A secondary contribution is a novel time-series prediction algorithm that predicts gestures from the utterance. This prediction algorithm can address time-series problems with complex input and be applied to other applications that classify time-series data.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Joshua Garcia
Wed, May 14, 2014 @ 01:00 PM - 03:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title:
A Unified Framework for Identifying and Studying Architectural Decay of Software Systems
Ph.D Candidate: Joshua Garcia
Time: 1:00pm
Date: May 14, 2014
Location: PHE 223
Committee:
Nenad Medvidovic (Chair)
William G.J. Halfond
Stan Settles (Outside Member)
Abstract:
The effort and cost of software maintenance tends to dominate other activities in a software system's lifecycle. A critical aspect of maintenance is understanding and updating a software system's architecture. However, the maintenance of a system's architecture is exacerbated by the related phenomena of architectural drift and erosion---collectively called architectural decay---which are caused by careless, unintended addition, removal, and/or modification of architectural design decisions. These phenomena make the architecture more difficult to understand and maintain and, in more severe cases, can lead to errors that result in wasted effort or loss of time or money. To deal with architectural decay, an engineer must be able to obtain (1) the current architecture of her system and understand (2) the possible types of decay that may occur in a software system and (3) the manner in which architectures tend to change and the decay it often causes.
The high-level contribution of this dissertation is a unified framework for addressing different aspects of architectural decay in software systems. This framework includes a catalog comprising an expansive list of architectural smells (i.e., architectural-decay instances) and a means of identifying such smells in software architectures; a framework for constructing ground-truth architectures to aid the evaluation of automated recovery techniques; ARC, a novel recovery approach that is accurate and extracts rich architectural abstractions; and ARCADE, a framework for the study of architectural change and decay. Together, these aspects of the unified framework are a comprehensive means of addressing the different problems that arise due to architectural decay.
This dissertation provides several evaluations of its different contributions: it presents case studies of architectural smells, describes lessons learned from applying the ground-truth recovery framework, compares architecture-recovery techniques along multiple accuracy measures, and contributes the most extensive empirical study of architectural change and decay to date. This dissertation's comparative analysis of architecture-recovery techniques addresses several shortcomings of previous analyses, including the quality of ground truth utilized, the selection of recovery techniques to be analyzed, and the limited number of perspectives from which the techniques are evaluated. The empirical study of architectural change and decay in this dissertation is the largest empirical study to date of its kind in long-lived software systems; the study comprises over 112 million source-lines-of-code and 460 system versions.
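One concrete and easily automated instance of an architectural smell is a dependency cycle among components. The sketch below is a minimal illustration of that idea, not an example drawn from the dissertation's catalog; the component names are invented. It finds one cycle in a component-dependency map via depth-first search:

```python
from collections import defaultdict

def find_dependency_cycle(deps):
    """Return one dependency cycle among components, or None.

    `deps` maps a component name to the set of components it depends on.
    A cycle between components is a classic indicator of decay: modules
    that were meant to be independent have become tightly coupled.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nbr in deps.get(node, ()):
            if color[nbr] == GRAY:               # back edge: cycle found
                return stack[stack.index(nbr):] + [nbr]
            if color[nbr] == WHITE:
                cycle = dfs(nbr)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for comp in list(deps):
        if color[comp] == WHITE:
            cycle = dfs(comp)
            if cycle:
                return cycle
    return None

deps = {
    "ui": {"logic"},
    "logic": {"storage"},
    "storage": {"ui"},     # closes a cycle: ui -> logic -> storage -> ui
    "util": set(),
}
print(find_dependency_cycle(deps))   # ['ui', 'logic', 'storage', 'ui']
```

A real smell detector would of course work over a recovered architecture rather than a hand-written map, and would report many smell types, not just cycles.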
Location: Charles Lee Powell Hall (PHE) - 223
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Qunzhi Zhou
Wed, May 14, 2014 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: A Complex Event Processing Framework for Holistic Fast Data Management
Ph.D. Candidate: Qunzhi Zhou
Defense Committee:
Viktor Prasanna (Co-Chair)
Yogesh Simmhan (Co-Chair)
Ellis Horowitz
Petros Ioannou
Time: 2:00 PM - 4:00 PM, Wednesday, May 14, 2014
Location: Hughes Aircraft Electrical Engineering Building (EEB) 248
Abstract:
Emerging applications in domains such as the Smart Grid, e-commerce, and financial services have been motivating Fast Data, which emphasizes the Velocity aspect of Big Data. Utility companies, social media platforms, and financial institutions often face scenarios where they must process data arriving continuously at a high rate for business innovation and analytics. Existing Big Data management systems, however, have mostly focused on the Volume aspect of Big Data. Systems such as Hadoop and NoSQL databases provide programming and query primitives that allow scalable storage and querying of very large data sets. Because they focus on data availability and read performance, these systems are best suited for applications that perform write-once-read-many operations on slow-changing data volumes.
Complex Event Processing (CEP), on the other hand, is a promising paradigm for managing Fast Data. CEP is recognized for online analytics of data that arrive continuously from ubiquitous, always-on sensors and digital event streams. It allows event patterns composed of correlation constraints, also called complex events, to be detected by examining event streams in real time for situation awareness. Specifically, CEP adopts high-throughput temporal pattern-matching algorithms to handle data Velocity. As a result, CEP has grown popular for operational intelligence, where online pattern detection drives real-time response.
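As a minimal illustration of the kind of temporal correlation constraint a CEP engine evaluates online (not the dissertation's engine; the event names and timestamps are invented), the sketch below matches the pattern "an event of type A followed by an event of type B within a time window" in one pass over an ordered stream, keeping only bounded state:

```python
from collections import deque

def match_sequence(stream, first, second, window):
    """Yield (e1, e2) pairs where a `second` event follows a `first`
    event within `window` seconds. `stream` yields (timestamp, type)
    tuples in time order, so one pass with a small buffer suffices.
    """
    pending = deque()      # recent `first` events still inside the window
    for ts, etype in stream:
        # Drop `first` events that can no longer be paired.
        while pending and ts - pending[0][0] > window:
            pending.popleft()
        if etype == second:
            for p in pending:
                yield p, (ts, etype)
        if etype == first:
            pending.append((ts, etype))

events = [(0, "overload"), (3, "overload"), (4, "outage"), (20, "outage")]
print(list(match_sequence(events, "overload", "outage", window=5)))
```

The outage at time 20 matches nothing because both overload events have aged out of the 5-second window; a production engine would add many operators (negation, aggregation, partitioning) on top of this core idea.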
Fast Data management demands certain distinctive capabilities from CEP systems to deal concurrently with data Variety, Volume, and Velocity. In this dissertation, we present a Complex Event Processing framework for holistic Fast Data management that considers all three V's. In particular, we extend state-of-the-art CEP systems and make the following contributions: 1) Semantic Complex Event Processing for on-the-fly query processing over diverse data streams, shielding data and domain Variety; 2) Stateful Complex Event Processing, which provides a hierarchical query paradigm for dynamic stream Volume management and on-demand query evaluation; 3) Resilient Complex Event Processing, which supports integrated querying across low-Velocity data archives and real-time data streams. We perform quantitative evaluations using real-world applications from the Smart Grid domain to verify the efficacy of the proposed framework and demonstrate the performance benefits of the optimization techniques.
Bio:
Qunzhi Zhou is currently a Ph.D. candidate in the Computer Science Department at the University of Southern California. His research interests are in information integration, stream processing, and distributed computing systems. He holds an M.S. in Computer Science from the University of Southern California and received his B.S. in Automation from Tsinghua University, China.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Ranjan Pal
Tue, May 27, 2014 @ 10:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
University Calendar
Thesis Title: Improving Network Security Through Insurance: A Tale of Cyber-Insurance Markets
PhD Candidate: Ranjan Pal
Date: May 27, 2014
Location: GFS 112
Time: 10am
Committee - Leana Golubchik (Chair), Konstantinos Psounis (Co-Chair), Minlan Yu, Viktor Prasanna (Outside Member)
Abstract:
In recent years, security researchers have established that technical security solutions alone will not result in a robust cyberspace, due to several issues jointly related to the economics and technology of computer security. In this regard, some have proposed cyber-insurance as a suitable risk management technique with the potential to jointly align the incentives of security vendors (e.g., Symantec, Microsoft), cyber-insurers (e.g., security vendors, ISPs, cloud providers), regulatory agencies (e.g., government), and network users (individuals and organizations), in turn paving the way for robust cyber-security. In this work, we theoretically investigate the following important question: can cyber-insurance really improve the security in a network? To answer this question we adopt a market-based approach. We analyze regulated monopolistic and competitive cyber-insurance markets, where the market elements consist of risk-averse cyber-insurers, risk-averse network users, a regulatory agency, and security vendors (SVs). Our analysis proves that technical solutions alone will not result in optimal network security, and leads to two important results: (i) without contract discrimination amongst users, there always exists a unique market equilibrium for both market types, but the equilibrium is inefficient and does not improve network security; and (ii) in monopoly markets, contract discrimination amongst users results in a unique market equilibrium that is efficient and improves network security; however, the cyber-insurer can make zero expected profit. The latter fact is often sufficient to de-incentivize the formation or practical realization of successful and stable cyber-insurance markets.
To alleviate the insurer's problem of potentially making zero profit, we suggest two mechanisms: (a) the SV could enter into a business relationship with the insurer and lock the latter's clients into using security products manufactured by the SV; in return for the increased sale of its products, the SV could split the average profit per consumer with the insurer; or (b) the SV could itself be the insurer and account for the logical/social network information of its clients to price them. In this regard, we study homogeneous, heterogeneous, and binary pricing mechanisms designed via a common Stackelberg pricing game framework. The binary pricing game turns out to be NP-hard, and for it we develop an efficient randomized approximation algorithm that achieves insurer profits up to 0.878 of the optimal solution. Our game analysis, combined with simulation results on practical networking topologies, illustrates increased maximum profits for the insurer (SV) at market equilibrium, which are always strictly positive, when compared to current SV pricing mechanisms in practice. In addition, the state of improved network security remains intact.
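To give a flavor of the leader-follower structure of a Stackelberg pricing game in its simplest (homogeneous-price) form, here is a toy sketch. It is far simpler than the dissertation's model, and the valuations and costs are invented: the insurer (leader) posts one price, each client (follower) best-responds by buying coverage iff the price does not exceed their private valuation, and the leader, anticipating this, searches candidate prices for maximum profit:

```python
def best_homogeneous_price(valuations, cost_per_client=0.0):
    """Return (profit, price) maximizing the leader's profit when each
    follower with valuation v buys iff price <= v. Only the posted
    valuations need to be tried as candidate prices: between two
    valuations, raising the price loses no buyers.
    """
    best = (0.0, 0.0)
    for price in sorted(set(valuations)):
        buyers = sum(1 for v in valuations if v >= price)
        profit = buyers * (price - cost_per_client)
        if profit > best[0]:
            best = (profit, price)
    return best

# Four hypothetical clients who value coverage at 4, 7, 7, and 10 units:
print(best_homogeneous_price([4, 7, 7, 10]))   # (21.0, 7)
```

The heterogeneous and binary mechanisms the abstract mentions discriminate across clients (e.g., by network position), which is where the hardness and the 0.878-approximation arise; none of that is captured by this toy.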
Location: Grace Ford Salvatori Hall Of Letters, Arts & Sciences (GFS) - 112
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Shuai Hao
Tue, May 27, 2014 @ 01:00 PM - 03:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Toward Understanding Mobile Apps at Scale
Ph.D. Candidate: Shuai Hao
Time: 1:00pm
Date: May 27, 2014
Location: RTH 306
Committee:
Ramesh Govindan (Co-Chair)
William G.J. Halfond (Co-Chair)
Leana Golubchik
Sandeep Gupta (Outside Member)
Abstract:
The mobile app ecosystem has experienced tremendous growth in the last decade. This has triggered active research on dynamic analysis of the energy, performance, and security properties of mobile apps. There is, however, a lack of tools that can accelerate and scale these studies to the size of an entire app marketplace. In this dissertation, we present three pieces of work that help researchers and developers move in this direction.
First, we present a new approach that provides fine-grained estimates of mobile app energy consumption. We achieve this through a novel combination of program analysis and per-instruction energy modeling. Our Android prototype, called eLens, shows that the approach is both accurate and lightweight. We believe eLens will accelerate the development of energy-efficient mobile apps.
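The accounting at the heart of per-instruction energy modeling can be sketched in a few lines: the energy of a code path is the sum of each instruction's count times its modeled cost. The instruction names and nanojoule costs below are hypothetical, not eLens's measured model (which would come from hardware measurement):

```python
def estimate_energy(instruction_counts, energy_model):
    """Estimate a path's energy as the sum of per-instruction costs.

    `instruction_counts` maps opcode -> execution count along the path;
    `energy_model` maps opcode -> energy per execution (here in nJ).
    """
    return sum(count * energy_model[op]
               for op, count in instruction_counts.items())

# Hypothetical per-instruction costs in nanojoules:
model = {"iload": 1.2, "iadd": 0.9, "invoke": 12.5}
path = {"iload": 40, "iadd": 15, "invoke": 3}
print(estimate_energy(path, model))   # 40*1.2 + 15*0.9 + 3*12.5 = 99.0
```

The program-analysis half of the approach would supply the per-path instruction counts without running the app on instrumented hardware; that part is what makes the estimates fine-grained yet lightweight.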
Then, we introduce a framework, called SIF, for selective app instrumentation. SIF contains two high-level programming abstractions: codepoint sets and path sets. Additionally, SIF provides users with overhead estimates for specified instrumentation tasks. By implementing a diverse set of tasks, we show that SIF's abstractions are compact and precise and that its overhead estimates are accurate. We expect that the release of SIF will accelerate studies of the mobile app ecosystem.
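As a rough analogue of selecting a codepoint set and attaching instrumentation to it (this is not SIF's actual API; the function names and predicate are invented), the sketch below wraps every callable in a namespace whose name matches a predicate, running a probe before each call:

```python
from functools import wraps

def instrument(namespace, predicate, probe):
    """Wrap each callable in `namespace` whose name satisfies
    `predicate` so that `probe(name)` runs before the call.
    Returns the size of the instrumented "codepoint set".
    """
    count = 0
    for name, fn in list(namespace.items()):
        if callable(fn) and predicate(name):
            def make(fn, name):
                @wraps(fn)
                def wrapped(*args, **kwargs):
                    probe(name)                  # instrumentation hook
                    return fn(*args, **kwargs)
                return wrapped
            namespace[name] = make(fn, name)
            count += 1
    return count

calls = []
def handle_click(): return "click"
def handle_swipe(): return "swipe"
def render(): return "frame"

ns = {"handle_click": handle_click, "handle_swipe": handle_swipe, "render": render}
n = instrument(ns, lambda name: name.startswith("handle_"), calls.append)
ns["handle_click"](); ns["render"]()
print(n, calls)   # 2 ['handle_click']
```

An overhead estimate in this toy would simply be the instrumented call count times a per-probe cost; SIF's real estimates are necessarily more sophisticated, since it rewrites app bytecode rather than a Python namespace.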
Last, we focus on a programming framework for dynamic analysis of mobile apps. This is motivated by the fact that existing research has largely developed analysis-specific UI-automation techniques, in which the logic for exploring app execution is intertwined with the logic for analyzing app properties. PUMA is a programmable framework that separates these two concerns. It contains a generic UI-automation capability and exposes high-level events for which users can define handlers. We demonstrate the capabilities of PUMA by analyzing seven distinct performance, security, and correctness properties across 3,600 marketplace apps.
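The separation of concerns described above can be sketched as a driver that owns the generic exploration loop while each analysis registers handlers for high-level events. The class and event name below are an invented illustration, not PUMA's API:

```python
class ExplorationDriver:
    """Generic exploration over an app's screen graph; analysis-specific
    logic lives entirely in handlers registered for high-level events.
    """
    def __init__(self, screen_graph):
        self.graph = screen_graph              # screen -> reachable screens
        self.handlers = {"screen_seen": []}

    def on(self, event, handler):
        self.handlers[event].append(handler)

    def explore(self, start):
        seen, frontier = set(), [start]
        while frontier:
            screen = frontier.pop()
            if screen in seen:
                continue
            seen.add(screen)
            for handler in self.handlers["screen_seen"]:
                handler(screen)                # analysis hook fires here
            frontier.extend(self.graph.get(screen, []))

visited = []
driver = ExplorationDriver({"home": ["settings", "search"], "search": ["results"]})
driver.on("screen_seen", visited.append)
driver.explore("home")
print(sorted(visited))   # ['home', 'results', 'search', 'settings']
```

Swapping the handler swaps the analysis (e.g., a security check instead of a screen log) without touching the exploration loop, which is the point of a programmable framework like PUMA.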
Location: Ronald Tutor Hall of Engineering (RTH) - 306
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Xiaoming Zheng
Thu, May 29, 2014 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
University Calendar
Thesis Title: Auction and Negotiation Algorithms for Decentralized Task Allocation
Date: May 29, 2014
Time: 2:00 pm
Location: GFS 111
Committee: Prof. Sven Koenig (Chair)
Prof. Craig Tovey
Prof. David Kempe
Prof. Maged Dessouky (Outside Member)
Abstract:
It is often important to coordinate a team of robots well in a distributed computing environment. In this dissertation, we study how to allocate and re-allocate tasks to distributed robots so that the team cost is as small as possible (that is, the team performance is as high as possible). Researchers have developed several algorithms based on auction-like and negotiation-like protocols for decentralized task allocation. However, the majority of these existing algorithms use either single-item auctions, in which only one task is allocated to some robot in each round so that the team cost increases the least, or single-item exchanges, in which only one task is transferred between two robots in each round so that the team cost decreases the most. These algorithms usually result in highly sub-optimal allocations and do not apply to complex tasks that need to be executed by more than one robot simultaneously.
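The single-item auction baseline can be sketched as a greedy loop: each round, the (robot, task) pair whose bid increases the team cost least wins. For brevity this toy assumes each robot's marginal cost for a task is independent of the tasks it already holds; the inter-task synergies that assumption ignores are exactly what bundle bids capture. All positions and costs are invented:

```python
def single_item_auction(tasks, robots, cost):
    """Greedy single-item auction: per round, allocate the one
    unassigned task with the cheapest bid. `cost(robot, task)` is the
    robot's marginal cost of taking on the task.
    """
    allocation = {r: [] for r in robots}
    remaining = list(tasks)
    while remaining:
        robot, task = min(
            ((r, t) for r in robots for t in remaining),
            key=lambda rt: cost(rt[0], rt[1]),
        )
        allocation[robot].append(task)
        remaining.remove(task)
    return allocation

# Robots at positions 0 and 10 on a line; a task's cost is the distance to it.
positions = {"r1": 0, "r2": 10}
alloc = single_item_auction([2, 9, 11], ["r1", "r2"],
                            lambda r, t: abs(positions[r] - t))
print(alloc)   # {'r1': [2], 'r2': [9, 11]}
```

Because each round is locally greedy and ignores how one task changes the cost of the next, this baseline can end far from the optimum, which motivates the bundle and reaction-function extensions the abstract describes.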
We develop a new auction algorithm, called sequential auctions with bundles, that extends single-item auctions to be able to allocate more than one task to robots in one round so that the team cost increases the least. We introduce a novel data structure, called bid trees, that each robot can construct and submit to the auctioneer independently. Theoretical results show that the bids from bid trees can succinctly characterize all necessary local information of robots needed by the auctioneer to allocate multiple tasks to robots in one round so that the team cost increases the least. Experimental results show that sequential auctions with bundles reduce the team costs of single-item auctions significantly.
We develop a new negotiation algorithm, called sequential negotiations with K-swaps, that extends single-item exchanges to be able to re-allocate more than one task among robots in one round so that the team cost decreases the most. We introduce a novel data structure, called partial k-swaps, that each robot can construct and propose to other robots independently. Theoretical results show that profitable partial k-swaps can succinctly characterize all necessary local information of robots needed to re-allocate multiple tasks among them so that the team cost decreases the most. Experimental results show that sequential negotiations with K-swaps reduce the team costs of given initial allocations significantly.
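Likewise, the single-item exchange baseline that K-swaps generalize can be sketched as a hill-climbing loop that repeatedly applies the best single-task transfer until none helps. The quadratic bundle cost below is invented for illustration; moving several tasks at once, as K-swaps do, escapes local optima this loop can get stuck in:

```python
def single_item_exchanges(allocation, cost_of):
    """Repeatedly transfer the one task whose move between two robots
    decreases the team cost the most; stop when no transfer helps.
    `cost_of(robot, tasks)` is a robot's cost for a bundle of tasks.
    """
    def team_cost(alloc):
        return sum(cost_of(r, ts) for r, ts in alloc.items())

    improved = True
    while improved:
        improved = False
        current = team_cost(allocation)
        best = (0, None)                       # (gain, resulting allocation)
        for src, tasks in allocation.items():
            for task in tasks:
                for dst in allocation:
                    if dst == src:
                        continue
                    trial = {r: list(ts) for r, ts in allocation.items()}
                    trial[src].remove(task)
                    trial[dst].append(task)
                    gain = current - team_cost(trial)
                    if gain > best[0]:
                        best = (gain, trial)
        if best[1] is not None:
            allocation = best[1]
            improved = True
    return allocation

# Bundle cost grows quadratically with size, so balancing loads pays off.
cost_of = lambda r, ts: len(ts) ** 2
alloc = single_item_exchanges({"r1": ["a", "b", "c"], "r2": []}, cost_of)
print({r: sorted(ts) for r, ts in alloc.items()})   # {'r1': ['b', 'c'], 'r2': ['a']}
```

Here one transfer already reaches the balanced optimum; with less convenient cost functions, single transfers can stall at a local optimum that a simultaneous multi-task swap would escape.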
We develop a new auction algorithm, called sequential auctions with reaction functions, that extends single-item auctions to be able to allocate either a simple or complex task to robots in one round so that the team cost increases the least. We introduce a novel data structure, called reaction functions, that each robot can construct and submit to the auctioneer independently. Theoretical results show that reaction functions can succinctly characterize all necessary local information of robots needed by the auctioneer to allocate either a simple or complex task to robots in one round so that the team cost increases the least. Experimental results show that sequential auctions with reaction functions reduce the team costs of an existing auction algorithm significantly.
Finally, we develop a new negotiation algorithm, called sequential negotiations with reaction functions, that extends single-item exchanges to be able to re-allocate complex or simple tasks among robots in one round so that the team cost decreases the most. Theoretical results show that reaction functions can succinctly characterize all necessary local information of robots needed to re-allocate complex or simple tasks among them so that the team cost decreases the most. Experimental results show that sequential negotiations with reaction functions reduce the team costs of given initial allocations significantly.
To summarize, in this dissertation we develop new auction and negotiation algorithms for solving task-allocation problems with simple and complex tasks and demonstrate empirically that these new algorithms reduce the team costs of existing ones significantly.
Location: Grace Ford Salvatori Hall Of Letters, Arts & Sciences (GFS) - 111
Audiences: Everyone Is Invited
Contact: Lizsl De Leon