Accelerate AI Inferencing from Development to Deployment

The hardware and software resources needed to support inferencing on deep neural networks can be substantial. So much so, in fact, that squeezing every ounce of performance out of available compute has become the new normal for developers and users.

Enter Intel® Deep Learning Boost (Intel® DL Boost), an AI instruction set that can deliver significant increases in efficiency and speed for deep learning inference workloads running on Intel® architecture.
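
A quick way to confirm that a given machine exposes these instructions: on Linux, the kernel reports an avx512_vnni CPU flag. The sketch below assumes Linux and the standard /proc/cpuinfo layout; the has_vnni helper is illustrative, not part of any Intel tool.

```python
# Minimal sketch: detect AVX-512 VNNI (Intel DL Boost) on Linux by
# scanning the CPU flags the kernel exposes in /proc/cpuinfo.
def has_vnni() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return "avx512_vnni" in f.read()
    except OSError:
        # /proc/cpuinfo is unavailable (non-Linux system, etc.)
        return False

if __name__ == "__main__":
    print("AVX-512 VNNI available:", has_vnni())
```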

Join technical consulting engineer Preethi Venkatesh to learn about Intel DL Boost technology and how to take advantage of it. Topics include:

  • An overview of the technology, including a key feature called Vector Neural Network Instructions (VNNI), which collapse the multiply-accumulate sequence at the heart of int8 inference into a single instruction, speeding delivery of inference results
  • How Intel DL Boost extends Intel® Advanced Vector Extensions 512 (Intel® AVX-512) operations while maximizing the use of compute resources
  • How Intel tools and frameworks like the Intel® Distribution of OpenVINO™ toolkit and Intel® Optimization for TensorFlow* help you optimize your AI code and realize the performance benefits of VNNI (a minimal inference sketch follows this list)
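
As a taste of the workflow the webinar walks through, here is a minimal sketch of CPU inference with the OpenVINO Runtime Python API. API names follow recent OpenVINO releases; "model.xml" and the 1x3x224x224 input shape are hypothetical placeholders for your own model. On DL Boost-capable CPUs, the CPU plugin dispatches quantized int8 models to VNNI automatically, with no code changes required.

```python
import numpy as np
from openvino.runtime import Core

# Load a model in OpenVINO IR format ("model.xml"/"model.bin" are
# hypothetical files produced by the Model Optimizer).
core = Core()
model = core.read_model("model.xml")

# Compile for the CPU target; the CPU plugin selects the best available
# ISA, using AVX-512 VNNI for int8 models on DL Boost-capable hardware.
compiled = core.compile_model(model, "CPU")

# Run one inference on dummy data (assumed NCHW shape: 1x3x224x224).
request = compiled.create_infer_request()
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
request.infer({0: dummy})

output = request.get_output_tensor(0).data
print(output.shape)
```

The same script runs unchanged on older CPUs; OpenVINO simply falls back to the best instruction set the hardware supports.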


Preethi Venkatesh, Technical Consulting Engineer, Intel Corporation

Preethi is a Technical Consulting Engineer focused on helping customers adopt the Intel® Distribution for Python* and the Intel® Data Analytics Acceleration Library through training, published articles, and open-source contributions. She joined Intel in 2017 after four years at Infosys Limited, where she was a Business Data Analyst.

Preethi holds a bachelor’s degree in Instrumentation Technology from Visvesvaraya Technological University, Belgaum, India, and a master’s degree in Information Systems with a focus on Data Science from the University of Texas at Arlington.

For more complete information about compiler optimizations, see our Optimization Notice.