Vincent Keller and Ralf Gruber from EPF Lausanne have just published a book titled “HPC@Green IT – Green High Performance Computing Methods” (Springer).
An interview with the two authors has already been published by HPCwire. hpc-ch additionally interviewed Vincent and Ralf about the “Swiss” aspects of their book.
Q: What is your background and why did you write this book at this moment?
Vince: My background is in computer science, with undergraduate studies at the University of Geneva and a diploma thesis under the supervision of Prof. Bastien Chopard. This gave me my first exposure to computational science, a subtle mix of computer science with physics, chemistry, and other basic sciences. During a year of research at the University Hospitals of Geneva developing a blood-flow simulation tool for stented cerebral aneurysms, I strengthened that background in computational science. I was then ready to begin PhD studies under the supervision of Ralf Gruber. The goal was to find a solution to the long-standing question: “which machine for a given (real) application?” We developed a cost model in which the entire turnaround cost is minimized under constraints given by the user: execution costs on the target machine due to investment and infrastructure, energy consumption, data transfer, waiting time, etc. The scheduling system then chooses the least expensive computer. This complete framework is the basis of the European project IANOS, for which I was the scientific coordinator in 2009. We recognized that this model makes it possible to automatically single out the energy-hungry machines. That was the beginning of the “green” orientation of our research. We finally recognized the lack of a book on the subject, and decided to write HPC@GreenIT.
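The cost-model idea Vince describes can be sketched in a few lines: every candidate machine is assigned a total turnaround cost (execution, energy, data transfer, waiting time), and the scheduler picks the cheapest. The machine names and cost figures below are purely illustrative, not taken from IANOS.

```python
# Hypothetical sketch of the turnaround-cost minimization described
# above. All names and numbers are illustrative placeholders.
machines = {
    "cluster_a": {"exec": 40.0, "energy": 25.0, "transfer": 5.0, "wait": 10.0},
    "cluster_b": {"exec": 55.0, "energy": 10.0, "transfer": 2.0, "wait": 1.0},
}

def total_cost(costs):
    """Sum all cost components of one candidate machine."""
    return sum(costs.values())

# The scheduler selects the machine with the lowest total cost.
best = min(machines, key=lambda m: total_cost(machines[m]))
print(best)  # cluster_b (68.0 vs 80.0)
```

A nice side effect, as Vince notes, is that the same bookkeeping exposes which machines dominate the energy component of the cost.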
Ralf: I have a PhD in plasma physics. Retired from EPF Lausanne for three years now, I continue research in HPC methods (HPCM) and in numerical techniques for precisely satisfying the constraints in partial differential operators. This is really COOL. In fact, the book HPC@GreenIT includes most of the material presented in Vince’s thesis, complemented by material from my doctoral-program course on “High Performance Computing Methods”.
Q: You presented your book at the Supercomputing Conference SC10 in New Orleans. What has been the feedback?
Vince: We got good feedback, because the book treats many aspects of energy reduction in the high-performance computing field. This feedback gives us the opportunity to go deeper into the subject, to correct some parts and, of course, to think about a future second edition with the latest bleeding-edge results. The book is well cited today.
Q: How does Switzerland compare with other nations on “green” (super)computing? Are we too rich to think about reducing energy costs?
Vince: Switzerland is not an exception. The power consumption of high-end supercomputers grows every year; this is a fact. If Switzerland wants to stay among the leading HPC countries, it must move towards Green IT and invest massively there. There are already examples of big Swiss institutions reaching the full electric capacity of their IT rooms and having to build new ones to keep pace with their scientists. The time of building big dams in the Alps is over, and nuclear fission for electricity production is a dead end; the future lies in energy reduction. In other words: doing more science with the same amount of energy, ideally less, must be the goal of our decision makers. The “richness” of Switzerland is therefore an asset rather than a disadvantage: money can be – and already is – invested in energy-efficiency projects, especially in high-end computing efficiency projects.
Ralf: In the US, computing centres are built with up to 25 MW of electric power. In Switzerland, CSCS is planning a new centre with a few MW installed. And this trend continues: one already talks about an exaflop machine (1 EF/s = 10^18 operations per second) in 2018. Consider that each node in a parallel machine today consumes about 200 Watts, half for the processor and half for the main-memory subsystem. Such a node delivers perhaps 200 GF/s thanks to an increasing number of cores. To reach one EF/s, millions of such nodes must be put together. This would lead to a power consumption comparable to a nuclear power plant. Do we really need an exaflop machine? Or should we rather build computers that are adapted to HPC application needs and, as a consequence, consume much less energy? I believe that Switzerland could be the first country to do so. This could lead to a few installations, each dedicated to one type of application. My experience with the Swiss research community is so positive that I could imagine it would even be possible to design and build low-energy computers for applications that do not need processor performance, but rather high main-memory bandwidth or database access. The existing Swiss efforts in these domains should be carefully identified and combined into a common effort. We believe that energy reductions of up to one order of magnitude are possible for applications not dominated by processor performance.
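Ralf’s “nuclear power plant” comparison follows from simple arithmetic on the figures he quotes (about 200 W and about 200 GF/s per node):

```python
# Back-of-the-envelope estimate of exascale power draw, using the
# per-node figures quoted above (rough 2010-era assumptions).
NODE_POWER_W = 200           # watts per node (half CPU, half memory)
NODE_PERF_GFS = 200          # GF/s per node
TARGET_GFS = 10**9           # 1 EF/s expressed in GF/s

nodes_needed = TARGET_GFS // NODE_PERF_GFS
total_power_mw = nodes_needed * NODE_POWER_W / 10**6

print(nodes_needed)    # 5,000,000 nodes
print(total_power_mw)  # 1000 MW, i.e. about one nuclear power plant
```

Five million nodes at 200 W each gives roughly 1 GW, which is indeed the scale of a nuclear plant’s electrical output.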
Q: What are the most remarkable Swiss projects addressing this topic? (We are thinking here, for example, of Aquasar, the joint project between ETH Zurich and IBM.)
Vince: Aquasar is a two-year-old project and is already out of date. The machine is based on processors (Cell BE) not suited to real science applications, with an intrinsically high peak performance of 450 MF/Watt on HPL (High Performance LINPACK). By comparison, the November 2010 Green500 list shows that the #1 system (the prototype of the IBM BlueGene/Q at Watson) achieves 1684 MF/Watt on the same benchmark. The operating frequency of a Cell BE is around 3 GHz, whereas the operating frequency of the unusual 17-core Power architecture of the BlueGene/Q (one core is used for the OS) is around 0.5 GHz – 6 times slower, meaning 36 times less energy for the same operation. But Aquasar is not useless: it is a groundbreaking testbed for new technology such as hot-water cooling, helping us think about the future of data centers. It has also shown that a very close collaboration between one of the largest supercomputer companies, IBM, and a Swiss institution is possible and works well. There are other laboratories and institutions working on specific parts of a future “green supercomputer”; among them we can cite the LTCM (Laboratory of Heat and Mass Transfer), led by Prof. John Thome, whose people work on an innovative way of cooling electronic components using a two-phase fluid.
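The “6 times slower means 36 times less energy” claim can be read as the standard voltage-frequency scaling argument: dynamic power scales roughly as V²·f, and if supply voltage scales linearly with frequency, power goes as f³ while time per operation goes as 1/f, so energy per operation goes as f². This is a simplifying model (real chips have leakage and limited voltage ranges), sketched here for illustration:

```python
# Sketch of the voltage-frequency scaling argument. Assumes dynamic
# power P ~ C * V^2 * f with V scaling linearly with f, so
# P ~ f^3 and energy per operation = P * (1/f) ~ f^2.
# This idealization ignores static (leakage) power.
def relative_energy_per_op(freq_ratio):
    """Energy per operation relative to the faster chip,
    given the ratio of the slower to the faster clock."""
    return freq_ratio ** 2

# 0.5 GHz vs 3 GHz: 6x slower clock -> ~36x less energy per operation.
print(relative_energy_per_op(0.5 / 3.0))  # ~0.0278, i.e. about 1/36
```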
Ralf: We should not forget that with the company Supercomputing Systems (SCS), Switzerland has a very advanced player in HPC. SCS is able to design and build hardware and software prototypes. The realization of the inter-node communication network TNet for the Swiss-T1 parallel computer, presented in our book, is a very good example of their abilities. However, to move ahead, we would clearly need cooperation with computer manufacturers and vendors.
Q: In which green-IT research fields are Swiss researchers most active? And in which research fields should we invest?
Vince: Switzerland has a long history in applications development and is known for it. Both federal institutes of technology (ETH Zurich and EPF Lausanne) and the cantonal universities are world-class institutions in applications development. They share leading positions in very large projects such as plasma physics for ITER, particle physics at CERN, and the Blue Brain Project at EPF Lausanne. At the national level, the recent HP2C project launched by CSCS, aimed at encouraging high-risk (but high-payoff) “local” projects, is another example of Swiss investment in applications development. But Switzerland has no home-made supercomputers and cannot benefit from large military investments as in the United States. Therefore, Switzerland must collaborate with chip manufacturers and supercomputer designers and builders, bringing them its applications expertise to help design new energy-efficient machines. We show in our book HPC@GreenIT that the only way to reduce the energy consumption of a computer is first to understand the needs of the applications. This is definitely one of the strengths of Swiss researchers.
Ralf: Perhaps we should not forget that one immediate source of energy reduction certainly lies in optimizing existing applications. Experience from the practical work performed during the HPCM courses has shown that most applications can be accelerated by factors of 2, and sometimes far more. This can be achieved by adapting the code to the hardware and/or by improving the numerical methods. It is easy to do and should be encouraged. A first effort has already been made by Vince in the CADMOS project.
Thank you Vince and Ralf!