If you develop computer vision applications, a good place to be is at the intersection of fast and portable inference: knowing the techniques that make deep neural networks (DNNs) deliver quick, accurate results, and ensuring extensibility across a variety of platforms. Choosing a DNN based on your design budget, coupled with pretrained-model trial runs using the Intel® Distribution of OpenVINO™ toolkit† to validate heterogeneous performance, is a great way to start.
This webinar will give you a jumpstart on all of that. Specifically, you'll learn:
- Basic principles of assessing CNN cost, e.g., MobileNet* vs. GoogLeNet* vs. VGG* (see the back-of-the-envelope sketch after this list)
- Algorithmic optimization techniques that improve network performance, including compression methods such as pruning, low precision, and sparsity
- Effective techniques for improving inference speed and portability, including a comparison of the classic approach with a more accurate approach to accounting for FLOPs, parameters, compute, memory, and heap size
- How a smaller heap size keeps more data closer to compute, so the network runs faster and uses less power, and how the OpenVINO toolkit's Model Optimizer makes it run even better (a conversion sketch follows the toolkit footnote below)
- How the complexities of pretrained models can be used to create fast and portable new models
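To make the cost-assessment idea concrete, here is a minimal back-of-the-envelope sketch (our illustration, not the webinar's material) comparing the parameter and multiply-accumulate (MAC) counts of a standard convolution with the depthwise-separable convolution MobileNet is built on; the layer shape is an arbitrary assumption:

```python
# Back-of-the-envelope CNN cost: parameters and MACs for one conv layer.
# The 128-channel, 56x56 shape below is an arbitrary illustrative choice.

def conv_cost(c_in, c_out, k, h_out, w_out):
    """Standard k x k convolution (bias ignored for simplicity)."""
    params = k * k * c_in * c_out
    macs = params * h_out * w_out  # every output pixel reuses all weights
    return params, macs

def separable_cost(c_in, c_out, k, h_out, w_out):
    """MobileNet-style block: depthwise k x k conv + 1 x 1 pointwise conv."""
    dw = k * k * c_in    # one k x k filter per input channel
    pw = c_in * c_out    # 1 x 1 conv mixes channels
    return dw + pw, (dw + pw) * h_out * w_out

std_params, std_macs = conv_cost(128, 128, 3, 56, 56)
sep_params, sep_macs = separable_cost(128, 128, 3, 56, 56)
print(f"standard : {std_params:9,} params  {std_macs:13,} MACs")
print(f"separable: {sep_params:9,} params  {sep_macs:13,} MACs")
print(f"~{std_macs / sep_macs:.1f}x fewer MACs with the separable block")
```

Summed over all layers, this same per-layer arithmetic is what produces the familiar gap between MobileNet (roughly 0.57 billion multiply-adds) and VGG-16 (roughly 15.3 billion).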
†The Intel® Distribution of OpenVINO™ toolkit (short for Open Visual Inference & Neural Network Optimization) fast-tracks the development of vision applications from edge to cloud.
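As a rough, assumption-laden illustration of that edge-to-cloud flow (file names are placeholders, and exact CLI flags and Python APIs vary across OpenVINO releases), the sketch below converts a trained model to the toolkit's Intermediate Representation with the Model Optimizer and then runs it through the runtime:

```python
# Sketch only: convert a trained model to OpenVINO IR, then run inference.
# File names are placeholders; flags and APIs differ across toolkit releases.
#
# Step 1 (shell) -- the Model Optimizer emits an .xml/.bin IR pair; compressing
# weights to FP16 halves their memory footprint (flag spelling varies by release):
#
#   mo --input_model my_model.onnx --output_dir ir/
#
# Step 2 (Python) -- load the IR and run it on a chosen device.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("ir/my_model.xml")               # graph plus weights (.bin)
compiled = core.compile_model(model, device_name="CPU")  # swap in "GPU", etc.

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in input image
request = compiled.create_infer_request()
request.infer([dummy])                                   # synchronous forward pass
print(request.get_output_tensor(0).data.shape)           # e.g., (1, 1000) for a classifier
```

Because the IR is device-agnostic, changing the device string is all it takes to try the same model on the heterogeneous targets the webinar covers.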
OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.