University of Southern California

Events Calendar






Events for June 12, 2014

  • SAP TERP 10 Student Certification Academy (Repeating Event)

    Thu, Jun 12, 2014

    Executive Education

    Conferences, Lectures, & Seminars


    Talk Title: SAP TERP 10 Student Certification Academy

    Abstract: Course Number & Dates:
    Session 2 (SAP 0614-06):
    Monday, June 2nd - Friday, June 6th, 2014
    Monday, June 9th - Thursday, June 12th, 2014
    Certification Exam on Friday, June 13th, 2014

    The University of Southern California, an active member of SAP's Global University Alliances program since its inception in 1996, has been chosen to offer the TERP10 Academy to its students in early Summer 2014. The TERP10 Academy and its certification are a direct response to the forecast global demand for SAP skills, estimated at between 30,000 and 40,000 positions over the next several years. Students who complete the TERP10 Academy and pass SAP's certification exam will have the advantage of a solid understanding of the business processes adopted by companies around the world. They will also gain insight into best business practices and into how SAP can be used to optimize business processes. The TERP10 Certification opens doors to internships as well as full-time positions with consulting firms such as Deloitte, Ernst & Young, KPMG, Hitachi, and other SAP partner companies.

    Host: Professional Programs

    More Info: http://gapp.usc.edu/professional-programs/short-courses/terp10#overview

    Audiences: Registered Attendees

    Contact: Viterbi Professional Programs

    Event Link: http://gapp.usc.edu/professional-programs/short-courses/terp10#overview

  • PhD Defense - Ashish Vaswani

    Thu, Jun 12, 2014 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Ashish Vaswani

    Date: 12th June, 2014
    Location: GFS 111
    Time: 1pm

    Committee:
    Dr. David Chiang (Chair)
    Dr. Liang Huang (Co-chair)
    Dr. Kevin Knight
    Dr. Jinchi Lv (Outside member)

    Title: Smaller, Faster, and Accurate Models for Statistical Machine Translation

    The goal of machine translation is to translate from one natural language into another using computers. The current dominant approach, statistical machine translation (SMT), uses large amounts of training data to automatically learn to translate from the source language to the target language. SMT systems typically contain three primary components: word alignment models, translation rules, and language models. These are some of the largest models in all of natural language processing, containing up to a billion parameters. Learning and employing these components pose difficult challenges of scale and generalization: using large models can slow down the translation process, and learning models with so many parameters can cause them to fit the training data too well (overfitting), degrading their performance at test time. In this thesis, we improve SMT by addressing these issues of scale and generalization for word alignment, learning translation grammars, and language modeling.
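
    To make the pipeline above concrete, the sketch below shows the standard log-linear scoring used by SMT decoders, in which the component models named here (translation rules, language model) each contribute a weighted log-score to every candidate translation. The feature values and weights are invented for illustration and are not taken from the thesis.

    ```python
    import math

    # Illustrative sketch of the standard log-linear SMT objective:
    # a decoder searches for the target sentence e maximizing
    #     e* = argmax_e  sum_k lambda_k * log f_k(e, f)
    # where the f_k are component models such as translation-rule
    # probabilities and the language model.

    def score_hypothesis(rule_logprobs, lm_logprob, weights=(1.0, 1.0)):
        """Score one candidate translation from its component log-scores."""
        w_tm, w_lm = weights
        return w_tm * sum(rule_logprobs) + w_lm * lm_logprob

    # Toy example: two applied translation rules plus a language model score.
    print(score_hypothesis([math.log(0.4), math.log(0.7)], math.log(0.05)))
    ```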

    Word alignments, which are correspondences between pairs of source and target words, are used to derive translation grammars. Good word alignments can yield good translation rules, improving downstream translation quality. We will present an algorithm for training unsupervised word alignment models with a prior that encourages smaller models, which improves both alignment and translation quality in large-scale SMT experiments.
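
    As background for the alignment work described above, here is a minimal sketch of EM training for IBM Model 1, the classic unsupervised word alignment model this line of research builds on. The smaller-model prior the thesis introduces is omitted; this is only the vanilla baseline, run on a two-sentence toy corpus.

    ```python
    from collections import defaultdict

    # Vanilla IBM Model 1 EM on a toy parallel corpus (no NULL word,
    # no sparsity prior -- the thesis adds a prior favoring smaller models).
    corpus = [("das haus".split(), "the house".split()),
              ("das buch".split(), "the book".split())]

    t = defaultdict(lambda: 1.0)          # t(e|f), uniform initialization

    for _ in range(10):                   # EM iterations
        count = defaultdict(float)
        total = defaultdict(float)
        for f_sent, e_sent in corpus:
            for e in e_sent:              # E-step: expected alignment counts
                z = sum(t[(e, f)] for f in f_sent)
                for f in f_sent:
                    c = t[(e, f)] / z
                    count[(e, f)] += c
                    total[f] += c
        for (e, f), c in count.items():   # M-step: renormalize
            t[(e, f)] = c / total[f]

    print(round(t[("house", "haus")], 3))  # converges toward 1.0
    ```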

    SMT systems typically model the translation process as a sequence of translation steps, each of which applies a translation rule. Most systems use composed rules (rules formed out of smaller rules in the grammar) to capture more context, improving translation quality. However, composition creates many more rules and large grammars, making both training and decoding inefficient. We will describe an approach that uses Markov models to capture dependencies between a minimal set of translation rules, yielding a slimmer model and a faster decoder while matching the translation quality of composed rules.
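
    A hypothetical sketch of the idea just described: instead of memorizing large composed rules, score a derivation of minimal rules with a bigram (Markov) model over the rule sequence, so that context is captured without enlarging the grammar. The rule names and probabilities below are invented for illustration.

    ```python
    import math

    # Bigram Markov model over a sequence of minimal translation rules.
    rule_bigram = {
        ("<s>", "r_NP"):  0.6,   # P(first rule is r_NP)
        ("r_NP", "r_VP"): 0.5,   # P(r_VP follows r_NP)
        ("r_VP", "</s>"): 0.8,   # P(derivation ends after r_VP)
    }

    def derivation_logprob(rules, model, floor=1e-6):
        """Log-probability of a minimal-rule derivation under a bigram model."""
        seq = ["<s>"] + rules + ["</s>"]
        return sum(math.log(model.get((a, b), floor))
                   for a, b in zip(seq, seq[1:]))

    print(derivation_logprob(["r_NP", "r_VP"], rule_bigram))
    ```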

    Good language models are important for ensuring the fluency of translated sentences. Because language models are trained on very large amounts of data, the number of parameters in standard n-gram language models can grow very quickly, making parameter learning difficult. Neural network language models (NNLMs) can capture distributions over sentences with far fewer parameters. We will present recent work on efficiently learning large-scale, large-vocabulary NNLMs. Integrating these NNLMs into a hierarchical phrase-based MT decoder significantly improves translation quality.
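
    For readers unfamiliar with NNLMs, the following is a minimal sketch of a Bengio-style feed-forward n-gram NNLM forward pass: embed the n-1 context words, pass them through a hidden layer, and output a distribution over the vocabulary. Sizes and weights are toy values, and the large-vocabulary training techniques the abstract alludes to are beyond this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    V, d, h, n = 1000, 32, 64, 4          # vocab, embedding, hidden, n-gram order

    E  = rng.normal(scale=0.1, size=(V, d))            # word embeddings
    W1 = rng.normal(scale=0.1, size=((n - 1) * d, h))  # context -> hidden
    W2 = rng.normal(scale=0.1, size=(h, V))            # hidden -> vocabulary

    def nnlm_logprobs(context_ids):
        """Log P(next word | the n-1 context word ids)."""
        x = E[context_ids].reshape(-1)                 # concatenate embeddings
        hidden = np.tanh(x @ W1)
        logits = hidden @ W2
        logits -= logits.max()                         # numerical stability
        return logits - np.log(np.exp(logits).sum())   # log-softmax over V words

    print(nnlm_logprobs([3, 17, 42]).shape)            # (1000,)
    ```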

    Location: Grace Ford Salvatori Hall Of Letters, Arts & Sciences (GFS) - 111

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon
