The last edition of Scientific Computing World asked a number of industry experts how and when we might reach exascale computing.
For Thomas Schulthess, professor of Computational Physics and director of the Swiss National Supercomputing Centre (CSCS), “The industry at large is in a state of confusion, because exascale as a goal is not well defined. Simply extrapolating from sustained tera- to peta- and on to exaflops in terms of the High-Performance Linpack (HPL) benchmark produces machines that may not be useful in applications. Since these machines will be expensive to produce and operate, they will need a real purpose, which HPL no longer represents well”.
For Schulthess, the challenges on the road to exascale can only be overcome if the HPC community approaches them in new ways. Irrespective of architectural direction, a massive investment in software and application code development will be required.
In his contribution, Schulthess also states: “The recently announced Open Power Consortium is something to watch. It will bring fresh competition to the market for latency-optimised cores and open new avenues in hybrid multi-core design. […] We should focus on science areas that require 100- to 1000-fold performance improvements over what is available today, and design supercomputers specifically to solve their problems”.
Read the October/November 2013 digital edition of Scientific Computing World.