OpenMP* 5.0: A Story about Threads and Tasks

Over its 21 years of existence, the OpenMP* API has evolved into THE standard programming model for multi-threading in HPC applications.

The reason is simple: OpenMP makes it easy for developers to learn and write parallel code that exploits the power of modern computers.

In November 2018, the OpenMP Architecture Review Board (ARB) released version 5.0 of the API specification—a major improvement with more powerful parallelization features for modern multi-threaded applications.

So what can you expect from 5.0?

Join Michael Klemm, Intel Senior Application Engineer and CEO of the OpenMP ARB, to find out. Highlights include:

  • A peek under the hood: how OpenMP works and what the future holds
  • A review of the new multi-threading features, including improvements to tasking (e.g., task reductions, task affinity, and task dependencies) and how they extend the current API; a minimal sketch follows this list
  • A discussion of the new scalable memory allocator API for controlling data placement across the memory hierarchy
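To give a flavor of the tasking improvements, here is a minimal, illustrative sketch (not taken from the webinar) of an OpenMP 5.0 task reduction: the enclosing taskgroup declares the reduction with the task_reduction clause, and each participating task opts in with in_reduction.

```c
#include <stdio.h>

int main(void)
{
    long sum = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* The taskgroup declares the reduction over 'sum'. */
        #pragma omp taskgroup task_reduction(+: sum)
        {
            for (int i = 1; i <= 100; ++i) {
                /* Each task contributes a partial result via in_reduction. */
                #pragma omp task in_reduction(+: sum) firstprivate(i)
                sum += i;
            }
        } /* All tasks finish and partial results are combined here. */
    }

    printf("sum = %ld\n", sum); /* expected: 5050 */
    return 0;
}
```

The new memory management API follows a similar spirit: routines such as omp_alloc and omp_free take an allocator handle (for example, the predefined omp_high_bw_mem_alloc allocator for high-bandwidth memory), so applications can steer allocations toward specific levels of the memory hierarchy.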

Get the Software
OpenMP 5.0 support is available in Intel® C Compilers and Intel® Fortran Compilers. Get them both in Intel® Parallel Studio XE. Try it free for 30 days now.

See Answers to Questions Asked During the Live Webinar
As happens on occasion, more questions were asked than the presenters had time to answer during the live session. So … we’ve curated them, gotten the answers, and rolled it all up in a Q&A webpage you can reference at your convenience (including links to key resources you may want to bookmark).

Michael Klemm, Senior Application Engineer, Intel Corporation

Michael is a Senior Application Engineer focused on high-performance and throughput computing. He joined Intel in 2008, and his responsibilities include performance analysis and optimization, software development and porting, benchmarking, customer support, platform evangelization, and customer training.

But that’s not all. His additional interests in compiler construction, programming language design, parallel programming, and performance analysis and tuning give Michael an exceptionally broad technology range. He also represents Intel on the OpenMP Language Committee, where he leads the effort to develop error-handling features for OpenMP, and serves as CEO of the OpenMP Architecture Review Board.

Michael holds a Master of Science in Computer Science from the University of Erlangen-Nuremberg and a PhD in Engineering from Friedrich-Alexander University Erlangen-Nuremberg, with a focus on Compiler Construction, Cluster Computing, and High-Performance Computing.

For more complete information about compiler optimizations, see our Optimization Notice.