High Performance Computing: Principles Of Parallel Programming
The computer industry has moved en masse to multicore processors; parallel programming is now the prevailing paradigm from the desktop to the supercomputer, and industry needs programmers with parallel programming skills.
This is a practically oriented introduction to parallel computing with a significant parallel programming component.
Lectures will cover:
- basic concepts and terminology;
- parallel computer memory architectures;
- parallel programming models;
- designing parallel programs including performance analysis and tuning;
- future directions in parallel programming.
Practical parallel programming will be in C, using OpenMP and MPI on a large multicore cluster running Unix, and CUDA on NVIDIA GPUs running Windows.
A major component of the module is the analysis of a serial application program: the aim is to identify its hot spots and to develop parallel implementations in OpenMP, MPI and CUDA. This project will take the form of a competition, with a prize for the best-performing parallel system.
Aims:
- To understand the principles underpinning effective and efficient parallel programs.
- To understand those factors that influence performance.
- To appreciate current trends in parallel computing.
- To understand the basics of C, MPI, OpenMP and CUDA.
On completion, students will be able:
- to analyse and identify the hot spots in a serial application program;
- to design, implement, experiment with and analyse parallel versions of the application program in OpenMP, MPI and CUDA;
- to present coherently the results of experiments and findings in both written and oral form;
- to identify, understand and appreciate high-level journal papers on parallel computing.