The Multi-Processor Computing (MPC) runtime is a unified runtime targeting the MPI, OpenMP, and PThread standards. One of its main specificities is the ability to run MPI processes inside threads, making it a thread-based MPI.

MPC supports the MPI 3.1 standard in both thread-based and process-based flavors and is optimized for the InfiniBand and Portals 4 networks. In MPC, MPI_THREAD_MULTIPLE is always enabled, allowing the transparent use of shared-memory parallelism alongside MPI.

MPC comes with a suite of privatizing compilers that enable existing codes to be ported transparently to the thread-based execution paradigm. To do so, MPC extends TLS (thread-local storage) support with a hierarchical model.

Running MPI processes inside threads enables lower latencies within a node (messages become direct memory copies). It also requires fewer communication endpoints, one per node instead of one per core, reducing memory consumption and launch times.

Latest News

MPC 4.0.0 has undergone a massive refactor in order to improve its modularity.
The MPC team is participating in the SC20 online events, including as a member of the Architecture and Network committee.
The MPC team will participate in the September 2020 Virtual MPI Forum Meeting, which replaces the September 2020 MPI Forum face-to-face meeting.
The MPC team will be at the online edition of IWOMP 2020 and will present a paper titled "Preliminary Experience…".
The MPC team will participate in the online edition of the EuroMPI 2020 conference, where the team will present a paper.
The MPC team participated in the virtual MPI Forum Meeting, which took place in lieu of the June 2020 MPI Forum face-to-face meeting.
MPC v3.4.0 is now available! New additions include Active Messages (integration of custom Active Messages relying on a gRPC approach) and MPI optimizations.
The MPC team will be in Portland, OR, for the February 2020 MPI-Forum face-to-face meeting!
The MPC team is in Seattle, Washington, USA, for the SIAM PP 2020 conference. Please come join us at Minisymposium MS8.
Today, a training session on MPC-specific features for efficient MPI+X programming takes place at the Ter@tec building in Bruyères-le-Châtel, France.