Sourcing MPC in your Environment
# Source the mpcvars script
# (located inside the bin/ directory of the MPC installation)

# For csh and tcsh:
source $MPC_PREFIX/mpcvars.csh

# For bash and sh:
source $MPC_PREFIX/mpcvars.sh

# Check that everything went well
which mpcrun
which mpc_cc
Compiling with MPC
MPC runs MPI tasks inside user-level threads, so you will generally want to compile your program with privatization enabled: each task then gets its own copy of global variables instead of colliding on shared ones. Thanks to MPC's automatic privatization support, this is just a matter of recompiling your code. If you want to run MPC in “process-based” mode instead, you may use the regular MPI compilation wrappers.
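To make the issue concrete, here is a minimal sketch (a hypothetical test program, not shipped with MPC) of the kind of global state that privatization duplicates per task; with privatization each rank works on its own copy of counter, whereas unprivatized tasks sharing a UNIX process would all touch the same one:

/* privatization_demo.c -- hypothetical example for illustration only.
 * Each MPI task updates a global variable. When several tasks run as
 * user-level threads inside one UNIX process without privatization,
 * they all share the same global; with automatic privatization each
 * task gets its own private copy. */
#include <mpi.h>
#include <stdio.h>

int counter = 0;   /* global variable: duplicated per task when privatized */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    counter += rank + 1;   /* each task writes a rank-dependent value */
    printf("Rank %d sees counter == %d\n", rank, counter);

    MPI_Finalize();
    return 0;
}

Compiled with a privatizing wrapper (see below), such a program can safely run with several tasks inside a single UNIX process.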
Compiling with MPI wrappers
MPC provides the “regular” MPI wrappers, allowing it to act as a drop-in replacement for existing MPI codes. Note that by default these wrappers do not enable privatization, and therefore the resulting code cannot run inside user-level threads.
# For C code
mpicc

# For C++ code
mpicxx
mpic++

# For Fortran codes
mpif77
mpifc
mpif90
In order to enable privatization you simply have to export the MPI_PRIV environment variable as follows:
# Export in your shell
export MPI_PRIV=1
mpicc foo.c

# Export per command
MPI_PRIV=1 mpicc foo.c
Compiling with MPC wrappers
In addition, MPC provides its own compilation wrappers, which enable privatization by default. These wrappers are prefixed with mpc_:
# For C code
mpc_cc

# For C++ code
mpc_cxx

# For Fortran codes
mpc_f77
Running programs with MPC
For convenience, MPC provides a drop-in replacement for mpirun, which can only express process-based configurations. To run in thread-based mode, use the mpcrun command instead. Note that mpcrun is the preferred launcher; mpirun is provided mostly for compatibility.
Using mpcrun
Unlike a regular MPI launcher, mpcrun lets you set both the number of UNIX processes (-p) and the number of MPI processes (-n), which we call tasks to avoid confusion. You can therefore express configurations specific to a thread-based runtime, with multiple MPI tasks running inside the same UNIX process. Some common configurations are summarized below:
# Two MPI tasks in a single UNIX process
mpcrun -n=2 ./mpiprog

# Two UNIX processes and two MPI tasks (one per process)
mpcrun -p=2 -n=2 ./mpiprog

# Hybrid configuration: two UNIX processes with two cores each
# and two MPI tasks per process (four tasks in total)
# NOTE: the -c option gives the number of cores per UNIX process (-p)
mpcrun -p=2 -c=2 -n=4 ./mpiprog
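If you want to observe the task/process distinction directly, the following sketch (a hypothetical check, not part of MPC) prints each task's rank together with its UNIX process id; with mpcrun -n=2 both ranks should report the same pid, while mpcrun -p=2 -n=2 should report two distinct pids:

/* whoami.c -- hypothetical check for illustration only.
 * Prints the MPI rank together with the UNIX process id, making the
 * difference between -n (MPI tasks) and -p (UNIX processes) visible. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("MPI task %d runs in UNIX process %d\n", rank, (int)getpid());

    MPI_Finalize();
    return 0;
}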
Using mpirun
The mpirun launcher is intended mostly for compatibility. It takes the following options:
mpirun (MPC)

USAGE: mpirun [OPTION]... [PROGRAM]...

  -n|-p        Number of processes to launch
  -c           Number of cores per process
  --verbose    Enable debug messages
These options lead to the following configurations:
# Two MPI processes (each bound to a UNIX process)
mpirun -np 2 ./mpiprog
mpirun -n 2 ./mpiprog
mpirun -p 2 ./mpiprog

# Two MPI processes with two cores each
mpirun -n 2 -c 2 ./mpiprog