Artificial Intelligence: Cloud to Edge Acceleration

Drones helping farmers decide where and how much fertilizer and pesticide to apply. Wall Street using machine learning for stock trading. Devices performing simultaneous language translation at an international conference. Seventeen years of computer science and literature catalogued by a million vCPUs in a public cloud. Thank you, AI.

Intel takes a holistic approach to AI, particularly machine learning, providing both hardware and software to support AI from the Cloud to the Edge. Intel CPUs, accelerators, and software tools fuel deep learning performance, particularly for data scientists and application developers.

In this video, Henry Gabb and Wei Li, Vice President and General Manager for Machine Learning and Translation at Intel, discuss the challenges and opportunities AI brings, and what Intel is doing about both. Watch as they touch on several important topics:

  • How accelerators—Intel® FPGAs, Intel® Movidius™, and the Intel® Nervana™ Neural Network Processor (Intel® Nervana™ NNP)—fit into Intel’s training and inference strategies
  • The three components of Intel’s software strategy for deep learning frameworks such as Google’s TensorFlow*, Caffe*, and MXNet*
  • How BigDL enables big data and deep learning efficiencies inside the Spark* environment
  • How the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) can reduce training time from weeks to hours
  • The roles the Intel® Machine Learning Scaling Library (Intel® MLSL) and the nGraph Library play in optimizing deep learning workloads
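The bullet on Intel® MKL-DNN above refers to optimized deep learning primitives such as convolutions, matrix multiplies, and activations. As a rough, illustrative sketch (not Intel’s API), the naive pure-Python dense-layer computation below shows the kind of inner-loop work that libraries like Intel MKL-DNN replace with vectorized, cache-blocked kernels, which is where the week-to-hours training speedups come from:

```python
# Illustrative only: the dense-layer primitive (matrix multiply plus bias)
# that optimized libraries such as Intel MKL-DNN implement with highly
# tuned kernels. This naive version shows the computation, not the
# optimization.

def dense_layer(x, weights, bias):
    """Compute y = x @ W + b for a single input vector x."""
    rows = len(weights)       # number of input features
    cols = len(weights[0])    # number of output features
    assert len(x) == rows
    y = list(bias)            # start from the bias vector
    for i in range(rows):
        xi = x[i]
        for j in range(cols):
            y[j] += xi * weights[i][j]
    return y

# Example: 2 input features mapped to 3 outputs
x = [1.0, 2.0]
w = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [0.5, 0.5, 0.5]
print(dense_layer(x, w, b))  # [1.5, 2.5, 8.5]
```

In a real deep learning framework this loop nest runs millions of times per training step over large batches, so replacing it with an optimized primitive dominates overall training time.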

Wei Li, Vice President & General Manager of Machine Learning & Translation, Core & Visual Computing Group, Intel Corporation

Wei Li is vice president in the Software and Services Group and general manager of Machine Learning and Translation at Intel Corporation, responsible for several areas of software systems, including machine learning, binary translation, and emulation. His team works with industry and academia to enable the software ecosystem, and collaborates with Intel hardware teams designing future processor products. Since joining Intel in 1998, Wei has led teams that contributed to Intel data center, client/mobile, Internet of Things, and artificial intelligence businesses. He holds 11 U.S. patents, and has served as an associate editor for ACM Transactions on Programming Languages and Systems. Wei earned a Ph.D. in computer science from Cornell University, completed the Executive Accelerator Program at the Stanford Graduate School of Business, and taught computer science at Stanford University.

Henry Gabb, Sr. Principal Engineer, Intel Corporation

Henry is a senior principal engineer in the Intel Software and Services Group, Developer Products Division, and is the editor of The Parallel Universe, Intel’s quarterly magazine for software innovation. He first joined Intel in 2000 to help drive parallel computing inside and outside the company. He transferred to Intel Labs in 2010 to become the program manager for various research programs in academia, including the Universal Parallel Computing Research Centers at the University of California at Berkeley and the University of Illinois at Urbana-Champaign. Prior to joining Intel, Henry was Director of Scientific Computing at the U.S. Army Engineer Research and Development Center MSRC, a Department of Defense high-performance computing facility. Henry holds a B.S. in biochemistry from Louisiana State University, an M.S. in medical informatics from the Northwestern Feinberg School of Medicine, and a Ph.D. in molecular genetics from the University of Alabama at Birmingham School of Medicine. He has published extensively in computational life science and high-performance computing. Henry recently rejoined Intel after spending four years working on a second Ph.D. in information science at the University of Illinois at Urbana-Champaign, where he developed expertise in applied informatics and machine learning for problems in healthcare and chemical exposure.

Accelerating AI from the Edge to the Cloud
ON DEMAND • 60:41 mins
For more complete information about compiler optimizations, see our Optimization Notice.