Agenda hpc-ch Forum on “Handling huge amounts of data for HPC”, Oct. 25th at CSCS

With 46 participants (registration is now closed), the next hpc-ch forum on “Handling huge amounts of data for HPC” will be the largest forum organized by our community. We are very happy that so many people will visit Lugano and participate in this event, jointly organized by CSCS and USI.

Setting the Scope

Many scientific fields rely on increasingly large data sets. The data can be the result of measurements, as in astrophysics or genomics, or the result of simulations, as in climate research. The buzzword for this trend is “Big Data”. Our challenge, as providers of HPC systems, will be to handle petabytes and even exabytes of data and to provide customers with an efficient way to store and use the generated data. This means striking a balance between bandwidth, capital investment, operational costs, security, availability, reliability, etc.

The trend toward Big Data in HPC is also changing relationships inside our organizations. Until now, data management has largely been the speciality of the departments providing general IT services, but HPC now needs its own Big Data management, with specific know-how that partially diverges from the standard ways of preserving and using data. This opens the discussion about leadership on storage.

Key Questions

  • What are useful storage hierarchies? (scratch, project, archive, …)
  • What are our experiences with Hierarchical Storage Management (HSM) systems (HPSS, TSM, DMF, StorNext, others…)?
  • What are the different storage policies we support?
  • What are the pros and cons between the different parallel file systems we have in use? (Lustre, GPFS, …)
  • How well are those parallel file systems integrated with HSM solutions?
  • How do we integrate different storage technologies and migrate between different systems over time as technology evolves?
  • How do we provide remote users and systems with access to data? And how far away do you have to be to be considered “remote”?
  • How long do we need to preserve data?
  • How do new technologies help? SSDs, or venerable tapes?
  • As the cost of HPC CPU time decreases and time to solution shrinks, is it cheaper to archive the results or to re-run the whole simulation?


Contact

Stefano Gorini, CSCS, tel. 091 610 82 92
Michele De Lorenzi, CSCS, tel. 091 610 82 08
Rolf Krause, USI/ICS, tel. 058 666 43 09


Program

The hpc-ch meeting will already begin on Wednesday, October 24th, with a joint dinner at the Antica Osteria del Porto in Lugano, starting at 19:00. The dinner will be sponsored by the Institute of Computational Science of the Università della Svizzera italiana.

9:45 – 10:10 Coffee and registration

10:10 – 10:20 Greeting and introduction

  • Dominik Ulmer (CSCS)
  • Rolf Krause (USI)
  • Michele De Lorenzi and Stefano Gorini (CSCS)

10:20 – 11:00 Keynote Presentations CSCS & USI

  • The Exascale Challenge, Revisited – Since the early days of computing, storage has always been a challenge. In high-performance computing in particular, high-performance, high-capacity storage has been of great interest. The problem for HPC centers hasn’t changed much: how can we best store data, and how can we best move data? After a short journey into the past, we will review the current situation at CSCS and identify some key upcoming challenges; Luc Corbeil (CSCS)
  • Data compression and HPC: lossy or lossless? Methodological viewpoint and implications – I am going to talk about the methodological challenges of compressing large data, a conceptual comparison of lossy and lossless compression algorithms, their advantages and limitations, and the resulting implications for software and hardware requirements. Examples of compressing (i) meteorological data (in collaboration with Will Sawyer from CSCS) and (ii) direct numerical flow simulation data will be presented; Illia Horenko (USI)

11:00 – 12:00 Short Presentations I

  • Data Lifecycle Management in Life Sciences: Experiences in SystemsX.ch – In the past four years of data taking and data analysis in the various projects, we have learned a great deal about the data management tasks and storage requirements of Life Science groups. The presentation includes a simple lifecycle definition for Life Science data in general, with consideration of the various orthogonal aspects and specialities of the individual data-taking technologies and research interests; Peter Kunszt (SystemsX)
  • Storage for Bioinformatics @ Biozentrum; Rainer Pöhlmann & Konstantin Arnold (SIB)
  • Big Data @ Vital-IT; Roberto Fabbretti (Vital-IT)
  • RCS – Rail Control System of the Swiss Federal Railways – A platform with Big Transient Data, HPC, real-time and high-availability aspects; Valerio Zanetti (T-Systems)
  • Managing Petabytes of Tape Storage at CERN – German is Section Leader for tape storage at CERN, covering 82 PB of CASTOR data (for Physics) and 7 PB for backup; German Cancio Melia (CERN)

12:00 – 12:10 Community Development (Michele De Lorenzi)

  • Overview of hpc-ch activities
  • Planning of activities for 2013

12:10 – 13:00 Lunch

13:00 – 14:00 Guided tour of CSCS

14:00 – 16:40 Short Presentations II (incl. a short break)

  • Data and Storage Systems @ Uni Bern; David Gurtner (University of Bern)
  • Cost effective Data handling in middle size HPC and DataCenter; Pawel Bednarek (University of Fribourg)
  • Challenges for big data management at PSI in the light of a decade of LHC computing; Derek Feichtinger et al. (PSI)
  • Disk Pool Manager storage systems at the Universities of Bern and Geneva – The Disk Pool Manager (DPM) is a storage solution developed at CERN for particle physics experiments in the LHC era. The DPM provides some functionality of a distributed file system and can be deployed on inexpensive hardware consisting of disk servers running Linux. In this presentation we will briefly describe the functionality of the DPM software; Szymon Gadomski (Université de Genève)
  • Lustre Metadata Scaling – We show Lustre metadata-operation performance results with standard disks and with solid-state storage, using a commodity array of disks to guarantee high availability. We also show the improvements in metadata code and scalability in Lustre version 2.3, and the new measurement tools provided for the metadata server; Gabriele Paciucci (JNET 2000)

16:40 Farewell and end of the meeting