Exascale in Sight: MPI Communication Layer Migration Benefits

Exascale is a mere billion billion calculations per second, a level often compared to the processing power of the human brain. Find out how the Intel® MPI Library can help push your applications into this new frontier.

Deliver flexible, efficient, scalable cluster messaging with the Intel® MPI Library, which implements the high-performance MPI-3.1 standard on multiple fabrics and shares the exascale-focused MPICH CH4 code base used at Argonne National Laboratory.

CH4 is designed for low software overhead to better exploit next-generation hardware. The change brings new capabilities to your MPI programs, including reduced latency and new programming models such as multi-endpoint MPI, which saturates the fabric, reduces memory usage per MPI rank, and delivers multi-threaded MPI performance comparable to single-threaded runs.
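To make the multi-endpoint point concrete, here is a minimal sketch of the hybrid MPI + OpenMP pattern it targets: every thread calls MPI directly under MPI_THREAD_MULTIPLE, using its thread ID as the message tag so each thread's traffic forms an independent stream. The program itself is standard MPI; the I_MPI_THREAD_SPLIT setting and mpiicc compile line in the comments reflect typical Intel MPI usage as an illustrative assumption, not details covered in the talk.

    /* Sketch: hybrid MPI + OpenMP with MPI_THREAD_MULTIPLE, the
     * threading pattern that multi-endpoint implementations accelerate.
     * Compile (illustrative):  mpiicc -qopenmp thread_split.c -o thread_split
     * Run (illustrative; thread-split mode assumed to be enabled via
     * I_MPI_THREAD_SPLIT=1):   mpirun -n 2 ./thread_split
     * Assumes the same number of OpenMP threads on every rank.
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;

        /* Request full thread support so every thread may call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each thread exchanges a message around a ring of ranks, using
         * its thread ID as the tag so the per-thread streams stay
         * independent (one logical "endpoint" per thread). */
        #pragma omp parallel
        {
            int tid     = omp_get_thread_num();
            int right   = (rank + 1) % size;
            int left    = (rank - 1 + size) % size;
            int sendbuf = rank * 100 + tid;
            int recvbuf = -1;

            MPI_Sendrecv(&sendbuf, 1, MPI_INT, right, tid,
                         &recvbuf, 1, MPI_INT, left,  tid,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            printf("rank %d thread %d received %d\n", rank, tid, recvbuf);
        }

        MPI_Finalize();
        return 0;
    }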

Watch to learn about this and more, including:

  • How Intel MPI Library lets you quickly change or upgrade to new interconnects without requiring changes to the application or user-level operating environment
  • How abstracting the underlying communication code benefits application performance and simplifies the user experience
  • How to develop applications that can run on multiple cluster interconnects, chosen by the user at runtime (see the sketch after this list)
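The interconnect-portability points come down to a simple fact: a portable MPI program never names a fabric in its source. The sketch below is an ordinary MPI "hello" that can be rebound to shared memory, verbs, or TCP at launch time through launcher environment settings; the I_MPI_FABRICS and FI_PROVIDER variables in the comment are the usual Intel MPI and libfabric controls, and the specific values shown are illustrative assumptions rather than commands taken from the talk.

    /* Minimal fabric-agnostic MPI program: no interconnect appears in
     * the source.  The same binary can run over different fabrics by
     * changing launcher environment variables, e.g. (illustrative):
     *   I_MPI_FABRICS=shm:ofi FI_PROVIDER=verbs mpirun -n 4 ./hello
     *   I_MPI_FABRICS=shm:ofi FI_PROVIDER=tcp   mpirun -n 4 ./hello
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &name_len);

        /* Which fabric carried this output is decided at launch time,
         * not in the code. */
        printf("rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }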

Download Intel MPI Library for free.

James Tullos, Technical Consulting Engineer, Intel Corporation

James joined Intel in 2012 and is a Technical Consulting Engineer supporting Intel® Software Development Products. He focuses on parallel performance, primarily in HPC and cluster environments, training customers to get the most from Intel software tools. His background is in aerospace engineering, with previous work on propulsion system analysis programs. James has a BS in Aerospace Engineering from Mississippi State University and a Master of Science in Aeronautical and Astronautical Engineering from Purdue University. In his mythical spare time, he also enjoys reading, video games, and “random whatever”.
