BigDL v0.9.0 Release Notes

Release Date: 2019-07-22
  • Highlights

    ✨ Continued VNNI acceleration support: we add optimizations for more CNN models, including object detection models, and enhance model scale generation for VNNI.

    ➕ Attention-based model support: we add a Transformer implementation covering both language models and translation models.

    🐎 RNN optimization: we support LSTM integration with MKL-DNN, which achieves roughly a 3x performance speedup (a usage sketch follows these highlights).
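
    A minimal usage sketch of the LSTM layer through the standard BigDL Scala API. The bigdl.localMode and bigdl.coreNumber properties are our assumptions for running outside Spark; as we understand it, switching this model onto the MKL-DNN LSTM path is done via the bigdl.engineType setting described in the configuration docs, so treat this as a sketch rather than a definitive recipe:

    ```scala
    import com.intel.analytics.bigdl.nn.{Linear, LSTM, Recurrent, Sequential, TimeDistributed}
    import com.intel.analytics.bigdl.tensor.Tensor
    import com.intel.analytics.bigdl.utils.Engine

    object LstmSketch {
      def main(args: Array[String]): Unit = {
        // Assumed properties for a plain local JVM run (no Spark).
        System.setProperty("bigdl.localMode", "true")
        System.setProperty("bigdl.coreNumber", "4")
        Engine.init

        // batch x time x feature input through an LSTM, then a per-step Linear head.
        val model = Sequential[Float]()
          .add(Recurrent[Float]().add(LSTM[Float](128, 256)))
          .add(TimeDistributed[Float](Linear[Float](256, 10)))

        val input = Tensor[Float](4, 20, 128).rand() // 4 sequences, 20 steps, 128 features
        val output = model.forward(input).toTensor[Float]
        println(output.size().mkString("x")) // expected: 4x20x10
      }
    }
    ```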

    Details

    • 👍 [New Feature] Add attention layer support
    • 👍 [New Feature] Add FeedForwardNetwork layer support
    • 👍 [New Feature] Add ExpandSize layer support
    • 👍 [New Feature] Add TableOperation layer to support table calculation with different input sizes
    • 👍 [New Feature] Add LayerNormalization layer support
    • 🌐 [New Feature] Add Transformer support for both language and translation models
    • 👍 [New Feature] Add beam search support in Transformer model
    • [New Feature] Add Layer-wise Adaptive Rate Scaling (LARS) optim method
    • 👍 [New Feature] Add LSTM integration with MKL-DNN support
    • 👍 [New Feature] Add dilated convolution integration with MKL-DNN support
    • [New Feature] Add parameter processing for the LarsSGD optim method
    • 👍 [New Feature] Support affinity binding option with MKL-DNN
    • 🏗 [Enhancement] Documentation enhancement for configuration and build
    • 0️⃣ [Enhancement] Reflection enhancement to get default values for constructor parameters
    • [Enhancement] Use one AllReduceParameter for multi-optim-method training
    • 👍 [Enhancement] CAddTable layer enhancement to support input expansion along specific dimension
    • [Enhancement] ResNet-50 preprocessing pipeline enhancement to replace RandomCropper with CenterCropper
    • [Enhancement] Calculate model scales for arbitrary mask
    • [Enhancement] Enable global average pooling
    • [Enhancement] Check input shape and underlying MKL-DNN layout consistency
    • 👻 [Enhancement] Threadpool enhancement to throw proper exception at executor runtime
    • 👍 [Enhancement] Support MKL-DNN format conversion from NTC to TNC
    • [Bug Fix] Fix backward graph generation topology ordering issue
    • [Bug Fix] Fix MemoryData hash code calculation
    • 🌲 [Bug Fix] Fix log output for BCECriterion
    • [Bug Fix] Fix setting mask for container quantization
    • 👷 [Bug Fix] Fix validation accuracy issue when multiple executors run on the same worker
    • [Bug Fix] Fix INT8 layer fusion between convolution with multi-group masks and BatchNormalization
    • [Bug Fix] Fix JoinTable scales generation issue
    • [Bug Fix] Fix CMul forward issue with special input format
    • [Bug Fix] Fix weight change issue after model fusion
    • [Bug Fix] Fix SpatialConvolution primitive initialization issue

Previous changes from v0.8.0

  • Highlights

    • ➕ Add MKL-DNN Int8 support, especially for VNNI acceleration. Low-precision inference significantly improves both latency and throughput
    • ➕ Add support for running MKL-BLAS models under MKL-DNN. We leverage MKL-DNN to speed up both training and inference for MKL-BLAS models (a configuration sketch follows these highlights)
    • ➕ Add Spark 2.4 support. Our examples and APIs are fully compatible with Spark 2.4, and we release binaries for Spark 2.4 alongside other Spark versions
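
    A sketch of how we understand the "MKL-BLAS models under MKL-DNN" switch is made: the bigdl.engineType property (an assumed name; confirm against the configuration documentation) requests the MKL-DNN backend, set as a JVM option for the executors and on the driver before Engine.init:

    ```scala
    import com.intel.analytics.bigdl.utils.Engine
    import org.apache.spark.{SparkConf, SparkContext}

    object MkldnnBackendSketch {
      def main(args: Array[String]): Unit = {
        // Assumed property: ask the driver JVM for the MKL-DNN backend. In a real
        // job this is usually passed on the spark-submit command line instead.
        System.setProperty("bigdl.engineType", "mkldnn")

        // Forward the same flag to the executor JVMs.
        val conf = Engine.createSparkConf(new SparkConf())
          .setAppName("mkl-blas-model-on-mkldnn")
          .set("spark.executor.extraJavaOptions", "-Dbigdl.engineType=mkldnn")

        val sc = new SparkContext(conf)
        Engine.init // initialize BigDL after the SparkContext exists

        // ... train or run inference with an existing MKL-BLAS model here ...
        sc.stop()
      }
    }
    ```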

    Details

    • 👍 [New Feature] Add MKL-DNN Int8 support, especially for VNNI acceleration (see the quantization sketch at the end of this list)
    • 👍 [New Feature] Add support for running MKL-BLAS models under MKL-DNN
    • 👍 [New Feature] Add Spark 2.4 support
    • [New Feature] Add auto fusion to speed up model inference
    • 👍 [New Feature] Memory reorder support for low-precision inference
    • 👍 [New Feature] Add bytes support for DNN Tensor
    • [New Feature] Add SAME padding in MKL-DNN layers
    • [New Feature] Add combined (and/or) triggers for training completion
    • 👍 [Enhancement] Inception-V1 Python training support enhancement
    • ⚡️ [Enhancement] Distributed Optimizer enhancement to support customized optimizer
    • 👍 [Enhancement] Add output shape computation for MKL-DNN supported layers
    • [Enhancement] New MKL-DNN computing thread pool
    • 👍 [Enhancement] Add MKL-DNN support for Predictor
    • 📚 [Enhancement] Documentation enhancement for Sparse Tensor, MKL-DNN support, etc
    • [Enhancement] Add ceil mode for AvgPooling and MaxPooling layers
    • 👍 [Enhancement] Add binary classification support for DLClassifierModel
    • 👍 [Enhancement] Support conversion between NHWC and NCHW for memory reorder
    • [Bug Fix] Fix SoftMax layer with narrowed input
    • 👍 [Bug Fix] TensorFlow loader to support checking all data types
    • 👍 [Bug Fix] Fix Add operation bug to support double type when loading TensorFlow graph
    • ⚡️ [Bug Fix] Fix one-step weight update missing issue in validation during training
    • 🔒 [Bug Fix] Fix Scala compiler security issue in 2.10 & 2.11
    • [Bug Fix] Fix model broadcast cache UUID issue
    • [Bug Fix] Fix predictor issue for batch size == 1
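
    For the Int8 entries above, a minimal low-precision inference sketch, assuming the module-level quantize() entry point; the model path is hypothetical, and the VNNI-specific scale-generation flow referenced in these notes is assumed to involve additional, separately documented steps:

    ```scala
    import com.intel.analytics.bigdl.nn.Module
    import com.intel.analytics.bigdl.tensor.Tensor
    import com.intel.analytics.bigdl.utils.Engine

    object Int8InferenceSketch {
      def main(args: Array[String]): Unit = {
        // Assumed properties for a plain local JVM run (no Spark).
        System.setProperty("bigdl.localMode", "true")
        System.setProperty("bigdl.coreNumber", "4")
        Engine.init

        // Hypothetical path to a trained FP32 model saved with saveModule.
        val model = Module.loadModule[Float]("/path/to/model.bigdl")

        // Convert supported layers to quantized counterparts for low-precision
        // inference (generic entry point; the VNNI scale-generation flow may
        // add steps on top of this).
        val quantized = model.quantize()

        // Run one 224x224 RGB image through the quantized model.
        val input = Tensor[Float](1, 3, 224, 224).rand()
        val output = quantized.forward(input).toTensor[Float]
        println(output.size().mkString("x"))
      }
    }
    ```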