Data parallelism, also called data-parallel compute, is no longer a new thing. It is THE programming model for most compute-intensive applications and solutions running on multicore systems, including those that drive AI, machine learning, and video processing.
And according to Intel Senior Fellow Geoff Lowney, it will likely remain the dominant compute pattern for the next 10 years.
The challenge, then, is helping developers express parallelism more easily across the expanse of hardware architectures—CPUs for sure, but also GPUs and FPGAs and VPUs and IPUs and … you get the picture. To do this, a new language is needed.
That language is Data Parallel C++ (DPC++), a key part of Intel’s oneAPI initiative and an extension of familiar C++ that enables new ways to express parallelism for cross-architecture development.
In this 12-minute video, Geoff sits down with Tech.Decoded to discuss DPC++ and what you need to know, including:
- Does DPC++ require separate host and kernel code?
- Why use DPC++ for heterogeneous parallelism vs adopting OpenCL™ or CUDA*?
- Do my legacy C++ programs need updating to take advantage of DPC++? If so, how much?
- Can I combine DPC++, Threading Building Blocks, Parallel STL, and OpenMP* in the same program?
- Will DPC++ features eventually become part of the C++ standard?
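On the first question above: DPC++ follows the SYCL single-source model, meaning host and device (kernel) code live in the same C++ file. A minimal sketch of that pattern, assuming a SYCL/DPC++ compiler is available (the queue, buffer, and kernel below are illustrative, not taken from the video):

```cpp
#include <CL/sycl.hpp>  // DPC++ / SYCL header as shipped in the oneAPI beta
#include <vector>
namespace sycl = cl::sycl;

int main() {
    std::vector<int> data(8, 1);
    {
        sycl::queue q;                                    // host code: select a device
        sycl::buffer<int> buf(data.data(), data.size());  // share data with the device
        q.submit([&](sycl::handler& h) {
            auto acc = buf.get_access<sycl::access::mode::read_write>(h);
            // kernel code: expressed as a C++ lambda, in the same file as the host code
            h.parallel_for(sycl::range<1>(data.size()),
                           [=](sycl::id<1> i) { acc[i] *= 2; });
        });
    }  // buffer destructor synchronizes results back to the host vector
    return data[0] == 2 ? 0 : 1;
}
```

No separate kernel files or strings, as OpenCL™ often requires; the compiler splits host and device code itself.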
Watch.
Get Started Now
- Visit the Intel® oneAPI beta website to learn about this initiative, including DPC++, free software toolkits, a cloud-based development sandbox, training, industry partners, and more.
- Try your code in the Intel® DevCloud. Sign up to develop, test, and run your solution in this free development sandbox with access to the latest Intel® hardware and oneAPI software, including DPC++. No downloads. No configuration steps. No installations.