Expressing Parallelism in C++ with Threading Building Blocks

Intel® Threading Building Blocks (Intel® TBB) is a widely used C++ library for shared-memory parallel programming. With Intel TBB, you can parallelize computationally intensive work—including heterogeneous computing—in a fast, portable, and scalable way without having to delve into the low-level details of threading.

In this Essentials webinar, you will:

  • Get a comprehensive introduction to the TBB library, including what’s new
  • Learn about its important features, including high-level generic parallel patterns, a flow graph interface for expressing dependence and data flow graphs, concurrent containers, a scalable memory allocator, a work-stealing task scheduler, and more
  • Discover how to use the standard interfaces of Intel’s Parallel STL library (which uses Intel TBB as its parallel execution engine) to unleash the power of multicore and vector parallelism

Intel TBB is available as a free, standalone download and is also included as part of Intel® Parallel Studio XE and Intel® System Studio, both of which can be tried and used for free.

Mike Voss, Principal Engineer, Visual Computing and Core Group, Intel Corporation

Mike is with the Developer Products Division at Intel and was the original architect of the Intel® Threading Building Blocks (Intel® TBB) flow graph API, a C++ API for expressing dependency, streaming, and data flow applications. He has co-authored over 40 published papers and articles on topics related to parallel programming, and frequently consults with customers across a wide range of domains to help them effectively use the threading libraries provided by Intel. He is currently championing the use of extensions to Intel TBB that enable software developers to coordinate the use of heterogeneous compute resources such as CPUs, integrated GPUs, FPGAs, and other domain-specific accelerators, and is one of the lead developers of Flow Graph Analyzer, a graphical tool for analyzing data flow applications targeted at both homogeneous and heterogeneous platforms. Mike earned PhD and MSEE degrees in Electrical and Computer Engineering from Purdue University.

For more complete information about compiler optimizations, see our Optimization Notice.