
Leading Experts Ponder the Problems of Exascale Computing

How do you program the equivalent of a billion laptops running simultaneously?
By: Eric Mankin
August 19, 2011 —

Experts from all over the world came to the Marina del Rey, California, campus of the USC Information Sciences Institute (ISI) recently to discuss the challenges of “Exascale” computing. The workshop was held under the auspices of the U.S. Department of Energy’s Office of Science and its Office of Advanced Scientific Computing Research (ASCR). The participants’ conclusions will be reported soon.

ISI's Robert 'Bob' Lucas gave the kickoff presentation for the Exascale event
Founded in 1972 and one of the birthplaces of the Internet, ISI is part of the Viterbi School of Engineering and has long been a center for advanced research in many areas of computing, including ways to share, manipulate and extract information from massive quantities of data.

In recent years, ISI and sister institutions have encountered systemic problems in ultrahigh performance computing, which they discussed at "Exascale and Beyond: Gaps in Research, Gaps in our Thinking".

Robert Lucas, director of ISI’s Computational Science group, opened the two-day event with a presentation entitled “Exascale: Can My Code Get from Here to There?” describing these emerging issues.

Exascale (10^18 operations per second, also called “extreme scale”) refers to new computing systems that run at rates of more than one million trillion floating-point operations (flops) per second – that is to say, performing the computing work of roughly one billion laptop computers running simultaneously. Such machines stretch the limits of the state of the programming arts.
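That “billion laptops” figure can be checked with a quick back-of-the-envelope calculation. The sketch below is illustrative only; the assumed per-laptop rate of about one gigaflop per second is our assumption, not a figure from the article.

```python
# Back-of-the-envelope check of the "billion laptops" comparison.
# Assumption (not from the article): a single laptop sustains roughly
# 1 gigaflop/s, i.e. 1e9 floating-point operations per second.
EXASCALE_FLOPS = 1e18   # one million trillion operations per second
LAPTOP_FLOPS = 1e9      # assumed sustained rate of one laptop

laptop_equivalents = EXASCALE_FLOPS / LAPTOP_FLOPS
print(f"Laptop equivalents: {laptop_equivalents:.0e}")  # -> 1e+09, about one billion
```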

“Today's high-end scientific and engineering software is formulated to fit an execution model that we have evolved to over half a century,” noted Lucas.

Computers are no longer standalone machines that do all of their work independently inside a single information-processing chip. Instead, engineers have been moving to systems in which computing functions and memory are distributed across vast industrial parks full of linked processing and memory units.
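To show what that distributed model looks like to a programmer, here is a minimal, hedged sketch in the message-passing style, written with the mpi4py bindings to MPI. It is a generic illustration of distributed memory and explicit communication, not code from the workshop.

```python
# Minimal distributed-memory sketch using MPI via mpi4py.
# Each process ("rank") owns its own memory and computes a partial result;
# results are combined only through explicit communication.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each rank sums a different slice of 0..999 (illustrative workload).
chunk = range(rank * 1000 // size, (rank + 1) * 1000 // size)
local_sum = sum(chunk)

# Explicit communication: combine the partial sums onto rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum across {size} ranks: {total}")
```

Run, for example, with `mpirun -n 4 python sum.py`. The difficulty Lucas describes is that at Exascale, programmers must coordinate enormous numbers of such processes with tools like these, most of which predate the machines they now have to drive.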

That creates a different world for programmers, Lucas noted. “It is getting increasingly difficult for application developers to map their codes to today's petascale systems, given a programming environment that is cobbled together from a mixture of programming languages, extensions, and libraries, almost all designed for the systems fielded in the last millennium. Features expected in Exascale systems are not well represented in today's programming model. The objective will be to bring these problems to the fore, not propose solutions to them.”

HPCwire also published a report on the conference.
As the “statement of challenges to be addressed” for the event noted, “Energy efficiency constraints and growth in explicit on-chip parallelism will require a mass migration to new algorithms and software architecture that is as broad and disruptive as the migration from vector to parallel computing systems that occurred 15 years ago.”

The conference attracted participants from all over the computer science world, including researchers from the Argonne, Sandia, Lawrence Livermore, Lawrence Berkeley, Los Alamos and Oak Ridge National Laboratories, as well as from MIT, Carnegie Mellon, and other major universities.

ISI researchers Pedro Diniz and Jacqueline Chame were part of the event along with their longtime ISI colleague Mary Hall, now at the University of Utah. ISI's Larry Godinez coordinated venue arrangements.

The event was a working conference aimed at producing recommendations for addressing and solving the problems of Exascale computing. A report detailing these recommendations is scheduled for publication September 15.