Events for December 09, 2010
-
Online Learning in Dynamic Spectrum Access: Restless Bandits, Equilibrium and Social Optimality
Thu, Dec 09, 2010 @ 03:00 PM - 04:30 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Mingyan Liu, Electrical Engineering and Computer Science, University of Michigan
Abstract:
We consider a dynamic spectrum access problem where the time-varying condition of a channel (e.g., as a result of random fading or certain primary users' activities) is modeled as an arbitrary finite-state Markov chain. At each instance of time, a (secondary) user selects and uses a channel and receives a certain reward as a function of the state of the channel (e.g., a good channel condition results in a higher data rate for the user). Each channel has a potentially different state space and statistics, both unknown to the user, who tries to learn which one is the best so it can maximize its usage of the best channel. The objective is to construct good online learning algorithms so as to minimize the difference between the user's total reward and the reward of always using the (on average) best channel had the channel statistics been known a priori; this difference is also known as the regret. This is an instance of the multi-armed bandit problem, which is well studied when each reward process is i.i.d. over time. In our case the reward processes are Markovian and, furthermore, restless, in that the channel conditions continue to evolve independently of the user's actions. This leads to a restless bandit problem, for which there are relatively few results on either algorithms or performance bounds in this learning context.
We introduce an algorithm that utilizes regenerative cycles of a Markov chain to compute a sample-mean-based index policy, and show that under mild conditions on the state transition probabilities of the Markov chains this algorithm achieves logarithmic regret uniformly over time, and that this regret bound is also optimal. We also show that this result can be easily extended to the case where the user is allowed to use multiple channels at a time. We numerically examine the performance of this algorithm, along with a few other algorithms, under Gilbert-Elliott channel models, and discuss how this algorithm may be further improved (in terms of its constant) and how this result may lead to similar bounds for other algorithms.
We then consider this type of online learning in a multiuser setting where simultaneous access to the same channel by multiple users may lead to collision and reduced reward. We show how such multiuser learning converges to a Nash equilibrium of an equivalent game, and how appropriate modifications to the learning algorithms can induce socially optimal channel allocation.
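As a rough illustration of the single-user setting described above, the following Python sketch pairs a sample-mean index computed over completed regenerative cycles with a UCB-style exploration bonus, and runs it on simulated Gilbert-Elliott channels. The reward values, transition probabilities, exploration constant L, and the choice of the "good" state as the regenerative state are illustrative assumptions, not the speaker's exact construction.

import math
import random

# Sketch only: sample-mean index over completed regenerative cycles,
# with a UCB-style exploration bonus, on restless two-state channels.

class GilbertElliottChannel:
    # Two-state Markov chain: state 0 = "bad", state 1 = "good".
    def __init__(self, p_bad_to_good, p_good_to_bad):
        self.p01 = p_bad_to_good
        self.p10 = p_good_to_bad
        self.state = random.randint(0, 1)

    def step(self):
        # The chain evolves every slot, whether or not it is observed (restless).
        if self.state == 0:
            self.state = 1 if random.random() < self.p01 else 0
        else:
            self.state = 0 if random.random() < self.p10 else 1
        return self.state

REWARD = {0: 0.1, 1: 1.0}   # reward as a function of the observed channel state
REGEN_STATE = 1             # designated regenerative state (illustrative choice)

def regenerative_cycle_policy(channels, horizon, L=2.0):
    k = len(channels)
    cycle_reward = [0.0] * k   # reward accumulated inside completed cycles
    cycle_count = [0] * k      # number of slots inside completed cycles
    total, t = 0.0, 0

    def advance(arm):
        # All channels evolve each slot; only the chosen one is observed.
        observed = None
        for j, ch in enumerate(channels):
            s = ch.step()
            if j == arm:
                observed = s
        return observed

    while t < horizon:
        untried = [i for i in range(k) if cycle_count[i] == 0]
        if untried:
            arm = untried[0]                      # play each channel once first
        else:
            arm = max(range(k), key=lambda i:     # sample-mean index + bonus
                      cycle_reward[i] / cycle_count[i]
                      + math.sqrt(L * math.log(t) / cycle_count[i]))

        # Phase 1: play the chosen channel until it is seen in the regenerative state.
        state = channels[arm].state
        while state != REGEN_STATE and t < horizon:
            total += REWARD[state]
            t += 1
            state = advance(arm)
        if t >= horizon:
            break

        # Phase 2: record one full cycle, from REGEN_STATE back to REGEN_STATE;
        # only these samples feed the sample-mean index.
        block_sum, block_len = 0.0, 0
        while t < horizon:
            block_sum += REWARD[state]
            block_len += 1
            total += REWARD[state]
            t += 1
            state = advance(arm)
            if state == REGEN_STATE:
                break
        cycle_reward[arm] += block_sum
        cycle_count[arm] += block_len

    return total

if __name__ == "__main__":
    chans = [GilbertElliottChannel(0.1, 0.3), GilbertElliottChannel(0.4, 0.1)]
    print("Total reward over 10000 slots:", regenerative_cycle_policy(chans, 10000))

The point the sketch tries to convey is that only samples gathered inside completed regenerative cycles feed the index, which is what keeps a simple sample-mean estimate meaningful even though the channels continue to evolve while unobserved.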
Host: Bhaskar Krishnamachari
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Shane Goodoff
-
Insights on Latent Perceptual Indexing with Applications in Audio and Speech Recognition
Thu, Dec 09, 2010 @ 10:00 AM - 12:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Shiva Sundaram, Senior Research Scientist, Deutsche Telekom Laboratories (T-Labs), Berlin, Germany
Talk Title: Insights on Latent Perceptual Indexing with Applications in Audio and Speech Recognition
Abstract: One of the main ideas that originated from my thesis work is latent indexing applied to content-based audio retrieval. Coined Latent Perceptual Indexing/Mapping, it fundamentally uses the information in weighted unit-document co-occurrence measures. The procedure is analogous to latent semantic indexing of text documents, except that the bags of features extracted from the audio clips constitute the documents, and the units are obtained by clustering the features that make up those documents. In this talk, I will present improvements to the basic approach and also present recent results on its application to acoustic modelling for speech recognition. I will also take this opportunity to talk about my related research efforts in affect-based retrieval of audio, salient-event detection in video, and natural speech interfaces.
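As a rough, self-contained illustration of the pipeline sketched in the abstract, the following Python snippet clusters synthetic frame-level features (standing in for real audio descriptors) into acoustic "units", builds a weighted unit-document co-occurrence matrix, projects it with a truncated SVD in the spirit of latent semantic indexing, and retrieves clips by cosine similarity in the latent space. The library choices (NumPy, scikit-learn) and all parameters are illustrative assumptions, not the speaker's implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# 1. Bag-of-features: each audio clip ("document") is a variable-length set of
#    frame-level feature vectors (synthetic here, e.g. MFCC-like in practice).
clips = [rng.normal(loc=i % 3, scale=1.0, size=(rng.integers(50, 120), 13))
         for i in range(20)]

# 2. Acoustic "units": cluster all frames to obtain a unit vocabulary.
all_frames = np.vstack(clips)
n_units = 32
km = KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(all_frames)

# 3. Unit-document co-occurrence counts: histogram of unit assignments per clip.
counts = np.zeros((len(clips), n_units))
for d, frames in enumerate(clips):
    units, freq = np.unique(km.predict(frames), return_counts=True)
    counts[d, units] = freq

# 4. Weighting (tf-idf here) and low-rank projection, analogous to latent
#    semantic indexing of text documents.
weighted = TfidfTransformer().fit_transform(counts)
svd = TruncatedSVD(n_components=8, random_state=0)
latent = svd.fit_transform(weighted)

# 5. Content-based retrieval: rank clips by cosine similarity in latent space.
query = latent[0:1]
ranking = cosine_similarity(query, latent).ravel().argsort()[::-1]
print("Clips most similar to clip 0:", ranking[:5])

In the actual work the clustering would presumably operate on perceptually motivated audio features and use a retrieval-oriented weighting scheme, but the algebra of mapping unit-document counts into a low-dimensional latent space is the same.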
Biography: Shiva Sundaram received his PhD and MS degrees, both in Electrical Engineering, from the University of Southern California (USC) in 2008 and 2003, respectively. He received his Bachelor of Engineering (B.E.) degree in Electronics Engineering from the University of Pune, India, in 2001. Since November 2008 he has been a Senior Research Scientist at Deutsche Telekom Laboratories (T-Labs) in Berlin, Germany. Before joining T-Labs, he was a research intern in the Speech and Language Technologies Group at Apple. From summer 2002 to fall 2008 he was a research assistant with Prof. Shrikanth Narayanan in the Signal Analysis and Interpretation Lab (SAIL) at the University of Southern California (USC), Los Angeles. His research interests in the area of speech and audio processing include recognition and synthesis of speech, signal processing for multimedia retrieval, audio perception, and pattern recognition. He has published over 25 scientific articles in international conferences and journals. In 2006, he received the best student paper award at the IEEE MMSP workshop for his work in music information retrieval.
Host: Professor Shrikanth Narayanan
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Mary Francis