Introducing a New Tool for Neural Network Profiling & Inference Experiments

If you use the Intel® Distribution of OpenVINO™ toolkit (even if you don’t … yet), the latest release introduces a new profiler tool to more easily run and optimize deep learning models.

Called Deep Learning Workbench, this production-ready tool enables developers to visualize key performance metrics such as latency, throughput, and performance counters for neural network topologies and their layers. It also streamlines the configuration of inference experiments, including int8 calibration, accuracy checks, and automatic detection of optimal performance settings.

Join senior software engineer Shubha Ramani for an overview and how-to demos of DL Workbench, where she’ll cover:

  • How to download, install, and get started with the tool
  • Its new features, including model analysis, int8 and Winograd optimizations, accuracy checks, and benchmark data
  • How to run experiments with key parameters such as batch size and parallel streams to determine the optimal configuration for your application
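Outside of the Workbench UI, the same kind of parameter sweep can be scripted against the toolkit's benchmark_app utility. The sketch below is a hypothetical dry run, not the tool's own workflow: it only prints the commands it would issue, and `model.xml` is a placeholder for your Intermediate Representation file. The `-b` (batch size) and `-nstreams` (parallel inference streams) flags are assumed to match a recent OpenVINO release.

```shell
#!/bin/sh
# Dry-run sketch: sweep batch sizes and stream counts with benchmark_app.
# Drop the leading "echo" to actually run each experiment once the
# toolkit is installed and model.xml points at a real IR file.
for b in 1 2 4 8; do
  for ns in 1 2 4; do
    echo benchmark_app -m model.xml -b "$b" -nstreams "$ns"
  done
done
```

Comparing the reported throughput and latency across these runs is one way to find the configuration that best fits your application's constraints.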


Get the software

Be sure to download the latest version of Intel® Distribution of OpenVINO™ toolkit so you can follow along during the webinar.

Shubha Ramani, Senior Software Engineer, Intel Corporation

Shubha is a senior software engineer whose specialties span all facets of deep learning and artificial intelligence. In her current role she focuses on the Intel® Distribution of OpenVINO™ toolkit, helping customers use its full capabilities and building complex DL prototypes. She also helps customers adopt Intel's automated driving SDKs and tools, and develops complex, real-world C++ samples using the Autonomous Driving Library for inclusion in Intel® GO™ automated driving solutions.

Shubha holds an MSEE in Embedded Systems Software from the University of Colorado at Boulder, and a BSEE in Electrical Engineering from Texas A&M University in College Station.

For more complete information about compiler optimizations, see our Optimization Notice.