The hardware and software resources needed to support inferencing on deep neural networks can be substantial. So much so, in fact, that squeezing every ounce of performance out of available compute resources to accelerate AI inferencing has become the new normal for developers and users.
Enter Intel® Deep Learning Boost (Intel® DL Boost), an AI instruction set that can deliver significant gains in efficiency and speed for deep learning inference workloads running on Intel® architecture.
Join technical consulting engineer Preethi Venkatesh to learn about Intel DL Boost technology and how to take advantage of it. Topics include:
- An overview of the technology, including a key feature called the Vector Neural Network Instructions (VNNI), which speeds delivery of inference results by collapsing the multiply-accumulate sequence at the core of low-precision (INT8) inference into a single instruction
- How Intel DL Boost extends Intel® Advanced Vector Extensions 512 (Intel® AVX-512) operations while maximizing the use of compute resources
- How Intel tools and frameworks like the Intel® Distribution of OpenVINO™ toolkit and Intel® Optimization for TensorFlow* help you optimize your AI code and realize the performance benefits of VNNI (see the sketch after this list)
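To see whether your CPU's Intel DL Boost support is actually being picked up, you can lean on the verbose logging built into oneDNN (formerly Intel MKL-DNN), the library behind Intel Optimization for TensorFlow*. The sketch below is a minimal check, assuming a TensorFlow* 2.x install from the intel-tensorflow package and a CPU with Intel AVX-512; the exact environment variable name varies by library version, as noted in the comments.

```python
import os

# Ask oneDNN to log every primitive it executes. Older builds read
# MKLDNN_VERBOSE, newer ones ONEDNN_VERBOSE; set it before TensorFlow loads.
os.environ["DNNL_VERBOSE"] = "1"

import numpy as np
import tensorflow as tf

# A small convolution is enough to trigger a oneDNN primitive.
x = tf.constant(np.random.rand(1, 224, 224, 3), dtype=tf.float32)
conv = tf.keras.layers.Conv2D(filters=64, kernel_size=3)
y = conv(x)
print(y.shape)
```

On a DL Boost-capable processor, the log header reports the detected instruction set with a line such as `cpu,isa:Intel AVX-512 with Intel DL Boost`. Kernels tagged `avx512_core_vnni` dispatch only for INT8 primitives, which is why quantizing a model is the usual path to the VNNI speedups described above.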
Get the software
- Intel® Optimization for TensorFlow*
- Intel® Distribution of OpenVINO™ toolkit, which includes the Intel® Math Kernel Library for Deep Neural Networks
- Intel® oneAPI DL Framework Developer Toolkit (beta), which includes the Intel® oneAPI Deep Neural Network Library
- Intel® AI Analytics Toolkit (beta), which includes Intel-optimized TensorFlow*, PyTorch*, and Python
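For a feel of the workflow once the software is installed, here is a minimal inference sketch using the OpenVINO™ toolkit's Inference Engine Python API (the pre-2022 `openvino.inference_engine` module; newer releases expose `openvino.runtime` instead). The `model.xml`/`model.bin` paths are hypothetical placeholders for an IR produced by the toolkit's Model Optimizer; quantizing that IR to INT8 is what lets the CPU plugin dispatch to VNNI kernels.

```python
import numpy as np
from openvino.inference_engine import IECore

# Hypothetical placeholder paths: an IR model exported by the Model Optimizer.
MODEL_XML = "model.xml"
MODEL_BIN = "model.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed random data shaped like the model's first input.
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
frame = np.random.rand(*input_shape).astype(np.float32)

# Run synchronous inference and report the output shapes.
result = exec_net.infer(inputs={input_name: frame})
print({name: out.shape for name, out in result.items()})
```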
More resources