EMPA Joins the hpc-ch Community

We are very happy to announce that Empa has joined the hpc-ch community as a regular member. Empa is an interdisciplinary research and services institution for materials science and technology development within the ETH Domain. Empa’s research and development activities are oriented toward meeting the requirements of industry and the needs of society, linking applications-oriented research with the practical implementation of new ideas, and bridging science and industry as well as science and society.

Empa will be represented in hpc-ch by Daniele Passerone and Carlo Pignedoli, both of whom work in the theory and atomistic simulation of materials group and run the Ipazia compute cluster.

We asked Daniele for some background on the use of HPC at Empa.

Q: What is the relevance of HPC for Empa?

The necessity of investing in HPC was recognized about five years ago with the creation of a group tasked with deploying a local compute cluster. The cluster has grown through gradual expansions, with financial support from the Empa and Eawag boards of directors and departments, the SNF and the BAFU.

The cluster has now reached its final size of about 100 nodes, on which about 15 different laboratories from Empa and Eawag perform their calculations.

Q: What kind of science are you doing using HPC?

The simulations done at Empa cover different subjects belonging to Empa and Eawag research programs in nanotechnology, adaptive material systems, biocompatible systems, environmental and social research, and novel energy technologies. Among others, our cluster hosts: ab initio simulations of surface-supported molecular nanostructures for designing new devices; granular matter simulations; atmospheric transport and chemistry; aircraft and railway noise models; climate change and water scarcity simulations; finite-element simulations of solid oxide fuel cells; ab initio simulations of dye-sensitized solar cells; novel catalysts and hydrogen storage materials; and imaging of disordered and porous media.

Q: What HPC systems are you operating?

Dell blades and 1U nodes with Woodcrest, Nehalem and AMD Opteron processors, interconnected via Gigabit Ethernet and InfiniBand. Storage is based on Infortrend hardware and QLogic infrastructure, connected to the nodes via Fibre Channel and InfiniBand. The nodes run a Linux-based operating system (CentOS with ParaStation), and the storage uses the Lustre filesystem. The racks are water cooled.
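The answer above mentions that the storage is served by Lustre. As a minimal, hedged illustration (not part of the actual Ipazia setup), a Linux client can discover which of its mount points are Lustre filesystems by parsing `/proc/mounts`; the file path and mount names below are only examples:

```python
def lustre_mounts(mounts_file="/proc/mounts"):
    """Return the mount points whose filesystem type is 'lustre'.

    /proc/mounts lines have the form:
        <device> <mount point> <fs type> <options> <dump> <pass>
    """
    points = []
    try:
        with open(mounts_file) as fh:
            for line in fh:
                fields = line.split()
                if len(fields) >= 3 and fields[2] == "lustre":
                    points.append(fields[1])
    except FileNotFoundError:
        pass  # not a Linux host, or no procfs available
    return points

if __name__ == "__main__":
    mounts = lustre_mounts()
    print(mounts if mounts else "no Lustre filesystems mounted")
```

On a Lustre client the same information is also available from the Lustre userspace tools (e.g. `lfs df`), which additionally report per-target usage.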

The system was built with the key assistance of Juerg Schächtelin of Empa’s informatics department.

Q: What are your biggest concerns about HPC?

Our biggest concern is human resources for system administration. We are currently addressing it in the following ways:

  1. Scientific administration, and part of the cluster administration, is done in our group, which also has important research duties in the laboratory “nanotech@surfaces”.
  2. Hardware and cluster administration is gradually being handed over to the informatics department, with two staff positions (one at 50%, the other at 20%).
  3. For support, we have a contract with ParTec (Germany) covering ParaStation and the parallel middleware, including remote help and installation of updates.
  4. We also collaborate with CSCS, which helped us install and deploy the Lustre filesystem on our new storage.