BEGIN:VCALENDAR
METHOD:PUBLISH
PRODID:-//Apple Computer\, Inc//iCal 1.0//EN
X-WR-CALNAME;VALUE=TEXT:USC
VERSION:2.0
BEGIN:VEVENT
DESCRIPTION:Speaker: Jorge F. Silva\, PhD\, Universidad de Chile\nTalk
 Title: Information Theoretic Measures for Representation
 Learning\nAbstract: Information-theoretic measures have been widely
 adopted for machine learning (ML) feature design. Inspired by this\, we
 look at the relationship between information loss in the Shannon sense
 and the operation loss in the minimum probability of error (MPE) sense
 when considering a family of lossy representations (or encoders). In
 this talk\, we introduce a series of results that show how adequate the
 adoption of mutual information (MI) is for predicting the operational
 quality of a representation in classification. Our findings support the
 observation that selecting/designing representations that capture
 informational sufficiency (IS) is appropriate for learning. However\, we
 also show that this selection is rather conservative if the intended
 goal is achieving MPE in classification. We conclude by discussing the
 capacity of the information bottleneck (IB) method to achieve lossless
 prediction and the expressive power of digital encoders in ML.\nHost:
 Dr. Eduardo Pavez
SEQUENCE:5
DTSTART:20240223T153000
LOCATION:EEB 132
DTSTAMP:20240223T153000
SUMMARY:ECE Seminar
UID:EC9439B1-FF65-11D6-9973-003065F99D04
DTEND:20240223T163000
END:VEVENT
END:VCALENDAR