CMake Cookbook

Detecting the MPI parallel environment

The code for this recipe is available at https://github.com/dev-cafe/cmake-cookbook/tree/v1.0/chapter-03/recipe-06 and includes C++ and C examples. The recipe is valid with CMake version 3.9 (and higher) and has been tested on GNU/Linux, macOS, and Windows. At https://github.com/dev-cafe/cmake-cookbook/tree/v1.0/chapter-03/recipe-06, we also provide a C example compatible with CMake 3.5.

An alternative and often complementary approach to OpenMP shared-memory parallelism is the Message Passing Interface (MPI), which has become the de facto standard for modeling a program executing in parallel on a distributed-memory system. Although modern MPI implementations allow shared-memory parallelism as well, a typical approach in high-performance computing is to use OpenMP within a compute node combined with MPI across compute nodes. An implementation of the MPI standard consists of the following:

  1. Runtime libraries.
  2. Header files and Fortran 90 modules.
  3. Compiler wrappers, which invoke the compiler that was used to build the MPI library with additional command-line arguments to take care of include directories and libraries. Usually, the available compiler wrappers are mpic++/mpiCC/mpicxx for C++, mpicc for C, and mpifort for Fortran.
  4. MPI launcher: This is the program you should call to launch a parallel execution of your compiled code. Its name is implementation-dependent and it is usually one of the following: mpirun, mpiexec, or orterun.
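CMake's FindMPI module locates these components for you. With CMake 3.9 and later, the module defines imported targets such as MPI::MPI_CXX, so a minimal CMakeLists.txt sketch might look like the following (the project name, executable name, and source file name are illustrative):

```cmake
# CMake 3.9 introduced the imported MPI::MPI_CXX target used below
cmake_minimum_required(VERSION 3.9 FATAL_ERROR)

project(recipe-06 LANGUAGES CXX)

# Locate an MPI implementation; REQUIRED aborts configuration if none is found
find_package(MPI REQUIRED)

add_executable(hello-mpi hello-mpi.cpp)

# The imported target carries the include directories, compile definitions,
# and link libraries discovered by FindMPI
target_link_libraries(hello-mpi
  PUBLIC
    MPI::MPI_CXX
  )
```

Linking against the imported target is preferable to consuming the older MPI_CXX_INCLUDE_PATH and MPI_CXX_LIBRARIES variables directly, since the target propagates all usage requirements in one place.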

This recipe will show how to find a suitable MPI implementation on your system in order to compile a simple MPI "Hello, World" program.