Intel® oneAPI Threading Building Blocks: Optimizing for NUMA Architectures

Threading Building Blocks (TBB) is a high-level C++ template library for parallel programming, developed as a composable, scalable solution for multicore platforms. In HPC, however, today's multi-socket Non-Uniform Memory Access (NUMA) systems have traditionally been programmed with OpenMP*.

But things have changed.

Increasingly, many independent software components require parallelism within a single application, especially in the AI and video processing/rendering domains. In such environments, a component's performance can degrade unless its parallelism composes with that of the other components.

Result? Many developers have pulled TBB into NUMA environments … a complex task for even the most seasoned programmers.

Which is why Intel is working to simplify the approach.

Join senior software development engineer Alexei Katranov to learn about Intel’s current work and success stories, and to engage in a lively discussion covering:

  • A brief overview of NUMA architectures, TBB, and Intel® oneAPI Threading Building Blocks beta (aka “oneTBB”, the oneAPI-optimized version of this award-winning library)
  • How to use oneTBB features to tune performance on NUMA systems (see the sketch after this list)
  • Samples from the FREE e-book Pro TBB: C++ Parallel Programming with Threading Building Blocks
  • A sneak peek at upcoming changes to TBB that will make tuning performance on NUMA systems even easier
  • A demonstration on combining TBB features to get great performance on a multi-socket NUMA server
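As a taste of the kind of tuning covered in the session, here is a minimal sketch of oneTBB's NUMA support: `tbb::info::numa_nodes()` enumerates the NUMA nodes, and a `tbb::task_arena` constrained to one node keeps its worker threads there. The per-node work in the loop body is a placeholder, and NUMA enumeration assumes oneTBB can load its hwloc-based binding support at run time.

```cpp
#include <oneapi/tbb/info.h>
#include <oneapi/tbb/task_arena.h>
#include <oneapi/tbb/task_group.h>
#include <oneapi/tbb/parallel_for.h>
#include <vector>

int main() {
    // Query the NUMA nodes visible to oneTBB.
    std::vector<tbb::numa_node_id> numa_nodes = tbb::info::numa_nodes();

    // One arena and one task_group per NUMA node; each arena is
    // constrained so its threads run on that node's cores.
    std::vector<tbb::task_arena> arenas(numa_nodes.size());
    std::vector<tbb::task_group> groups(numa_nodes.size());
    for (std::size_t i = 0; i < numa_nodes.size(); ++i)
        arenas[i].initialize(tbb::task_arena::constraints(numa_nodes[i]));

    // Submit work to each arena. Keeping threads on one node keeps memory
    // accesses local, provided the data is also allocated on that node.
    for (std::size_t i = 0; i < numa_nodes.size(); ++i)
        arenas[i].execute([&groups, i] {
            groups[i].run([] {
                tbb::parallel_for(0, 1000, [](int) { /* per-node work */ });
            });
        });

    // Wait for completion inside each arena.
    for (std::size_t i = 0; i < numa_nodes.size(); ++i)
        arenas[i].execute([&groups, i] { groups[i].wait(); });
    return 0;
}
```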


More Resources

  • Visit the Intel oneAPI Beta website to learn about this initiative and to download the Intel® oneAPI Base Toolkit, which includes oneTBB.
  • Try your code in the Intel® DevCloud—Sign up to develop, test, and run your solution in this free development sandbox with access to the latest Intel® hardware and oneAPI software. No downloads. No configuration steps. No installations.
Alexei Katranov, Software Development Engineer, Intel Corporation