Performance & Scalability Analysis of CNN-based Deep Learning Inference in the Intel® Distribution of OpenVINO™ toolkit

Convolutional neural networks (CNNs) are a powerful technique for AI application development, offering high accuracy in image-recognition problems. In this talk, Intel software engineer Dmitry Matveev analyzes the performance and scalability of several software development tools, each of which provides high-performance CNN-based deep learning inference on Intel® architecture.

In just under 30 minutes, Dmitry focuses on two typical data science problems: Image Classification¹ and Object Detection².

The experiment plan:

  1. Prepare a set of trained models for several development tools, including the Intel® Distribution of OpenVINO™ toolkit, Intel® Optimization for Caffe*, and OpenCV.
  2. Select a large set of images from each dataset so the performance analysis delivers accurate results, and experimentally determine the most appropriate parameters (e.g., batch size and the number of CPU cores used); a minimal benchmarking sketch follows this list.
  3. Carry out the computational experiments on Endeavor, NASA's shared-memory supercomputer based on 2nd Generation Intel® Xeon® Scalable processors (formerly code-named Cascade Lake).
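As a rough illustration of step 2, the sketch below measures image throughput with the OpenVINO Python API. It is not the benchmarking harness used in the talk: the model path, batch size, iteration count, and the use of the runtime's THROUGHPUT performance hint are assumptions made here for illustration, and the API shown is the current openvino Python package rather than whatever version the experiments used.

```python
# Minimal throughput-measurement sketch (illustrative; not the talk's harness).
# Assumes: `pip install openvino numpy` and a ResNet-50 IR at ./resnet50.xml
# (a placeholder path). The CPU-core count can be restricted externally, e.g.
# with `taskset` or `numactl`, to study scaling as in the experiment plan.
import time
import numpy as np
from openvino.runtime import Core

MODEL_XML = "resnet50.xml"   # placeholder path to the converted model
BATCH = 8                    # batch size is one of the tuned parameters
N_ITERS = 200                # number of timed inference calls

core = Core()
model = core.read_model(MODEL_XML)
model.reshape([BATCH, 3, 224, 224])          # set the batch dimension

# Ask the CPU plugin to optimize for throughput (parallel inference streams).
compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

# Synthetic NCHW input matching ResNet-50's 224x224 RGB input shape.
images = np.random.rand(BATCH, 3, 224, 224).astype(np.float32)

compiled(images)                             # warm-up run
start = time.perf_counter()
for _ in range(N_ITERS):
    compiled(images)
elapsed = time.perf_counter() - start

print(f"Throughput: {N_ITERS * BATCH / elapsed:.1f} images/s")
```

Sweeping the batch size and the number of cores made available to the process is how the most appropriate parameters in step 2 would be found experimentally.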

 

Leveraging these experiments, the session covers:

  • OpenVINO™ toolkit performance, including comparisons with similar software for CNN-based deep learning inference.
  • Analysis of OpenVINO toolkit scaling efficiency across dozens of CPU cores in throughput mode.
  • The performance acceleration delivered by Intel® AVX-512 VNNI (Vector Neural Network Instructions) on Intel Xeon Scalable processors.
  • Analysis of how well modern CPUs are utilized during CNN-based deep learning inference, using the Roofline model included in Intel® Advisor (the basic Roofline bound is sketched below).
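For that last point, the Roofline model bounds a kernel's attainable performance by the platform's peak compute rate and peak memory bandwidth. A minimal statement of the bound, with notation chosen here for illustration:

```latex
% Roofline bound: P(I) is the attainable performance of a kernel whose
% arithmetic intensity is I (floating-point operations per byte moved),
% P_peak is the platform's peak compute rate, and B_peak is its peak
% memory bandwidth. A kernel is memory-bound when I * B_peak < P_peak.
P(I) \;=\; \min\!\bigl(P_{\mathrm{peak}},\; I \cdot B_{\mathrm{peak}}\bigr)
```

Plotting measured kernels against this bound shows whether the inference workload on a given Xeon platform is limited by compute or by memory bandwidth.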

Check it out.

Download the software

¹ Image Classification model: ResNet-50; dataset: ImageNet
² Object Detection model: SSD300; dataset: PASCAL VOC 2012

Dmitry Matveev, Software Engineering Manager, Intel Corporation

Dmitry is a software engineering manager whose focus is deep learning application development and optimization. Prior to joining Intel in 2016, he honed his software expertise at companies including MERA, SoftDrom, and Itseez, working on areas ranging from functional programming and object-oriented analysis and design to domain-specific languages, digital signal processing, and machine learning. Dmitry holds a master's degree in Computer Science from Nizhniy Novgorod State Technical University.

For more complete information about compiler optimizations, see our Optimization Notice.