New Visualization Cluster Eiger Delivered at CSCS

The Visualization/Research & Development Cluster EIGER is a new CSCS facility that extends the current resource portfolio. During Q2/Q3 2010 it will be fully integrated into the CSCS supercomputing ecosystem and opened to the Swiss scientific user community for hybrid multicore/multi-GPU computing, visualization, data analysis, and general-purpose pre/post-processing activities.

The Eiger mountain in the Swiss Alps …

… and the Eiger cluster in the CSCS machine room being assembled (the interconnect cables are still missing).

The EIGER cluster is a tightly coupled computing cluster running the Novell SUSE Linux Enterprise Server 11 operating system. It comprises 19 nodes based on the dual-socket, six-core AMD Opteron 2427 processor running at 2.2 GHz, with 24 GB of main system memory per node, for a total of 228 CPU cores and 552 GB of aggregate memory. Four of the 19 cluster nodes offer a larger main memory capacity of up to 48 GB.

Altair PBS Professional 10.3 is the batch queuing system installed and supported on the cluster; it lets end users access any available visualization or computing resource in either shared or reserved mode.
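
For illustration, a minimal PBS Professional job script of the kind a user might submit with qsub is sketched below; the queue name "vis", the resource request and the application name are assumptions, not the actual EIGER site configuration:

    #!/bin/bash
    # Minimal PBS Professional job script (illustrative sketch only).
    # Queue name "vis" and the select/mem values are assumptions.
    #PBS -N vis-test
    #PBS -q vis
    #PBS -l select=1:ncpus=12:mem=24gb
    #PBS -l walltime=01:00:00

    cd "$PBS_O_WORKDIR"
    ./my_visualization_app        # hypothetical application binary

Such a script would be submitted with "qsub vis-test.pbs" for shared access; reserved access would typically go through an advance reservation created by the site (for example with pbs_rsub).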

Several classes of nodes have been defined within the cluster, each covering a specific function:

  • Class 0: Administration Node (1x)
  • Class 1: Login Node (1x)
  • Class 2: Visualization Nodes (7x)
  • Class 3: Fat Visualization Nodes (4x)
  • Class 4: Advanced Development Nodes (4x)
  • Class 5: Storage Nodes (2x)

Depending on its class membership, each cluster node is equipped with GPUs from one of two NVIDIA product families (a short device-query sketch follows the list):

  • NVIDIA GeForce GTX 285, 2 GB => Class 2/3 – soon to be extended with NVIDIA GeForce GTX 480, 1.5 GB
  • NVIDIA Tesla S1070 GPUs, 4 GB => Class 4 – soon to be extended with two upcoming NVIDIA Tesla/Fermi S2070, 6 GB
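
As a rough illustration of how a job could check which GPU family it has landed on, the following C sketch uses the CUDA runtime API to enumerate the devices visible on a node; it is not CSCS-supplied code and assumes only a working CUDA toolkit (compile with nvcc):

    /* Device-query sketch: list the GPUs visible on the current node. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable GPU visible on this node\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %.1f GB, compute capability %d.%d\n",
                   i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor);
        }
        return 0;
    }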

As a high-speed interconnect, the EIGER cluster relies on a dedicated InfiniBand QDR fabric that carries both parallel MPI traffic and the I/O traffic of the internal parallel scratch file system. In addition, a commodity 10 GbE LAN provides interactive login access and home, project and application file sharing among the cluster nodes, while a standard 1 GbE administration network is reserved for cluster management purposes.
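
The minimal MPI sketch below, in C, shows the kind of point-to-point traffic that would travel over the InfiniBand fabric whenever the two ranks are placed on different nodes; the compiler wrapper (mpicc) and the job placement are assumptions about the local environment:

    /* Two-rank MPI exchange; illustrative only. Build with, e.g., mpicc. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double buf = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            if (rank == 0) {
                buf = 3.14;
                /* message to rank 1; carried by the IB fabric when the
                   ranks sit on different nodes */
                MPI_Send(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %.2f\n", buf);
            }
        }

        MPI_Finalize();
        return 0;
    }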