Want to Flip Traditional HPC Modeling and Simulation? Enter Etalumis.

Probabilistic programming languages (PPLs) continue to receive attention for performing Bayesian inference in complex generative models. The trouble is, PPL-based science applications remain scarce, for three reasons: it’s impractical to rewrite complex scientific simulators in a PPL, inference carries a high computational cost, and scalable implementations are lacking.
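To make the idea concrete, here is a minimal sketch (not Etalumis code) of what a PPL automates: a toy generative model with one latent variable, plus importance sampling to estimate its posterior mean from an observation. All names here are illustrative.

```python
import random
import math

def log_likelihood(x, mu):
    # Gaussian log-density of observation x given latent mu (sigma = 1)
    return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)

def posterior_mean(observed, num_samples=50000):
    # Importance sampling with the prior N(0, 2) as the proposal:
    # draw latents from the prior, weight each by the likelihood
    # of the observation, and return the weighted mean.
    total_w = 0.0
    total_wmu = 0.0
    for _ in range(num_samples):
        mu = random.gauss(0.0, 2.0)              # latent drawn from prior
        w = math.exp(log_likelihood(observed, mu))
        total_w += w
        total_wmu += w * mu
    return total_wmu / total_w

random.seed(0)
est = posterior_mean(3.0)
# For prior N(0, 2^2) and likelihood N(mu, 1), the conjugate-normal
# posterior mean at x = 3.0 is (4 / 5) * 3.0 = 2.4, so the estimate
# should land close to 2.4.
print(est)
```

A PPL does this bookkeeping (tracing random choices, weighting, resampling) automatically for arbitrary programs; Etalumis extends that automation to existing simulators without rewriting them.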

Enter Etalumis (“simulate” spelled backwards), a new system that uses machine learning to perform Bayesian inference directly in existing simulators.

In this session, Lei Shao, Intel Deep Learning Software Engineer, presents the novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo methods and deep-learning-based inference compilation (IC) engines for tractable inference.
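The coupling mechanism can be illustrated with a toy, in-process analogue (the real system exchanges messages across languages and processes; everything below is an illustrative sketch, not the actual protocol): the simulator delegates every stochastic choice to an inference engine via labeled “sample” and “observe” messages, so the engine can record and control the execution trace.

```python
import random

class InferenceEngine:
    # Plays the PPL side of the protocol: answers the simulator's
    # "sample" requests and records every choice in an execution trace.
    def __init__(self):
        self.trace = []

    def handle(self, msg):
        if msg["type"] == "sample":
            # The engine, not the simulator, draws the random value,
            # so it can later substitute proposals from MCMC or a
            # trained inference-compilation network.
            value = random.gauss(msg["mean"], msg["std"])
            self.trace.append((msg["address"], value))
            return value
        if msg["type"] == "observe":
            # Observed data conditions the trace.
            self.trace.append((msg["address"], msg["value"]))
            return None

def simulator(channel):
    # Stand-in for an existing simulator, instrumented so each random
    # choice goes through the protocol channel instead of a local RNG.
    mu = channel({"type": "sample", "address": "mu",
                  "mean": 0.0, "std": 1.0})
    channel({"type": "observe", "address": "x", "value": mu + 0.5})
    return mu

engine = InferenceEngine()
result = simulator(engine.handle)
print(engine.trace)
```

Because the simulator only needs a thin messaging shim at its random-choice sites, it can stay in its native language (C++ in the SHERPA case) while the inference engine runs elsewhere.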

To guide IC inference, she:

  • Performs distributed training of a dynamic 3D CNN-LSTM architecture with a PyTorch*-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k, achieving 450 Tflop/s through PyTorch enhancements
  • Demonstrates a Large Hadron Collider use case with the C++ SHERPA (Simulation of High-Energy Reactions of PArticles) simulator
  • Achieves the largest-scale posterior inference in a Turing-complete PPL

Download the software

More resources

Lei Shao, Deep Learning Software Engineer, Intel Corporation

Lei Shao is an industry-leading expert in machine learning and large-scale distributed deep learning, with 20+ patents and numerous publications. She joined Intel in 2003 and holds a PhD in Electrical Engineering from the University of Washington in Seattle.

For more complete information about compiler optimizations, see our Optimization Notice.