Course on Advanced Distributed Memory Parallel Programming: MPI-2.2, MPI-3.0 and PGAS at CSCS

CSCS has just published a call for participation in the following course:

Advanced Distributed Memory Parallel Programming: MPI-2.2, MPI-3.0 and PGAS
23-25 May 2012 at CSCS, Lugano

The goal of this training workshop is to introduce performance-critical MPI-2.2 topics and to provide an overview of MPI 3.0, MPI for hybrid computing, and the Partitioned Global Address Space (PGAS) languages Coarray Fortran (CAF) and Unified Parallel C (UPC). The lab sessions will target a Cray XK6, a massively parallel processing (MPP) platform with GPUs, and a QDR InfiniBand cluster with Intel processors and GPUs.

Attendees are encouraged to bring their own applications and codes for the hands-on sessions. Representatives from the MPI 3.0 Forum (http://meetings.mpi-forum.org/MPI_3.0_main_page.php) and from Cray PE will be present at the meeting for discussions and feedback. There will also be invited talks in which presenters share their experiences and discuss issues encountered when using MPI and PGAS on the CSCS systems.

Registration deadline: May 18, 2012.

Event and registration details are available on the CSCS event page.

Speakers

Instructors

  • Torsten Hoefler (University of Illinois at Urbana-Champaign)
  • Roberto Ansaloni (Cray Inc.)

Invited speakers

  • Romain Teyssier, University of Zurich
  • Paolo Angelino, EPFL
  • Roger Käppeli, ETHZ
  • Will Sawyer, CSCS

Tentative agenda

First Day (May 23, 2012)

09.30 Welcome
09.40 Introduction to Advanced MPI Usage
10.00 MPI data types (details and potential for productivity and performance with several examples)
10.30 Break
11.00 MPI data types, contd.
11.30 Nonblocking and collective communication (including nonblocking collectives, software pipelining, tradeoffs and parametrization)
12.15 Lunch
13.30 User talks and discussion
14.30 Lab (MPI data types, non-blocking and collective communication)
15.00 Break
15.30 Lab, contd.
17.00 Wrap up

Second Day (May 24, 2012)

09.00 Topology mapping and neighborhood collective communication
09.45 One-sided communication (MPI-2 and MPI 3.0)
10.30 Break
11.00 One-sided communication, contd.
11.30 MPI and hybrid programming primer (OpenMP, GPUs, accelerators, MPI 3.0 proposals)
12.00 Lunch
13.30 User talks and discussion
14.30 Lab (Topology mapping, collective communication, one-sided communication)
15.00 Break
15.30 Lab and feedback on MPI 3.0 proposals
17.00 Wrap up

Third Day (May 25, 2012)

09.00 CAF and UPC introduction (portability and performance)
10.00 Cray programming environment and PGAS compilers
10.30 Break
11.00 Cray performance tools for MPI and PGAS code development and tuning
11.30 User talk
12.00 Lunch
13.30 Lab
15.00 Wrap up