
...

  1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection.
  2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results.

Real-time inference is often performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc., all of which use Java. Batch inference is often performed on big data platforms such as Spark using Scala or Java.

...

More details can be found at the Java Inference API document.


Julia API 

MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Some highlights include:

  • Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
  • Flexible manipulation of symbolic expressions to composite and construct state-of-the-art deep learning models.

Control Flow Operators (experimental)

...

  • Models are expressed with control flow, such as conditions and loops;
  • NDArrays in a model may have dynamic shapes, meaning that some or all of the NDArrays in a model have different shapes for different batches;
  • Models may want to use more dynamic data structures, such as lists or dictionaries.

It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, PyTorch, TensorFlow Eager). In this kind of interface, developers can use Python control flow, NDArrays with any shape at any moment, or Python lists and dictionaries to store data as they want. The problem with this approach is that it is highly dependent on the front-end programming language (mainly Python): a model implemented in one language can only run in the same language. The control flow operators instead make loops and conditions part of the graph itself, as the sketch below illustrates.
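
As a rough illustration, the following minimal sketch uses mx.nd.contrib.foreach, one of the control flow operators in this line of work, to fold a loop into the graph; the toy step function and shapes here are our own:

    # A minimal sketch (assuming MXNet >= 1.4 with the contrib control flow
    # operators). mx.nd.contrib.foreach runs a step function over the first
    # axis of the input, so the loop becomes an operator in the graph
    # rather than Python-level control flow.
    import mxnet as mx

    def step(data, states):
        # Accumulate a running sum across time steps.
        new_state = states[0] + data
        return new_state, [new_state]

    inputs = mx.nd.arange(6).reshape((3, 2))   # 3 time steps, batch of 2
    init_state = mx.nd.zeros((2,))
    outputs, final_states = mx.nd.contrib.foreach(step, inputs, [init_state])
    print(outputs)        # per-step running sums, shape (3, 2)
    print(final_states)   # final accumulated state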

...

More information can be found at Optimize dynamic neural network models with control flow operators.


SVRG Optimization


SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the 2013 paper Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. It is an optimization technique that complements SGD.

SGD is known for large-scale optimization, but it suffers from slow convergence asymptotically due to its inherent variance: SGD approximates the full gradient using a small batch of samples, which introduces variance. In order to converge faster, SGD often needs to start with a smaller learning rate.

SVRG remedies the slow convergence problem by keeping a snapshot of the estimated weights that is close to the optimal parameters and maintaining the average of the full gradient over a full pass of the data. The average of the full gradients over all data is calculated with respect to the parameters from the last mth epoch. SVRG has provable convergence guarantees for strongly convex smooth functions; a detailed proof can be found in section 3 of the paper. SVRG uses a different update rule than SGD: the gradient with respect to the current parameters, minus the gradient with respect to the parameters from the last mth epoch, plus the average of gradients over all data, as sketched below.
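
As a rough sketch of that update rule in plain Python (illustrative only; MXNet ships this technique in the mxnet.contrib.svrg_optimization module, and grad, n_samples, and lr below are our own placeholder names):

    # A minimal sketch of the SVRG update rule, not MXNet's implementation.
    # grad(w, i) is assumed to return the gradient of sample i at weights w;
    # rng is a numpy Generator, e.g. np.random.default_rng().
    def svrg_epoch(w, w_snapshot, full_grad_avg, grad, n_samples, lr, rng):
        """One inner epoch of SVRG.

        w             -- current weights
        w_snapshot    -- weights saved at the last snapshot (the "mth epoch")
        full_grad_avg -- average gradient over all data at w_snapshot
        """
        for _ in range(n_samples):
            i = rng.integers(n_samples)
            # Variance-reduced gradient: gradient at current weights, minus
            # gradient at the snapshot weights, plus the full-gradient average.
            g = grad(w, i) - grad(w_snapshot, i) + full_grad_avg
            w = w - lr * g
        return w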

...

Subgraph API (experimental)

MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends support a limited number of operators, so running computation in a model usually involves an interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:

TVM, MKLDNN and nGraph use customized data formats. Interaction between these backends and MXNet requires data format conversion.

...

TVM, MKLDNN, TensorRT and nGraph fuse operators.

...

Integration with these backends should happen at the granularity of subgraphs instead of at the granularity of operators. To fuse operators, we clearly need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well, where each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes the interaction of TVM and MKLDNN with MXNet much easier: neither the MXNet executor nor the MXNet operators need to deal with customized data formats.

Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define an interface for backends to customize graph partitioning and subgraph execution inside an operator; the sketch below illustrates the partitioning idea. More details can be found at PR 12157 and Subgraph API.
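
To make the partitioning idea concrete, here is a conceptual sketch (not the actual MXNet Subgraph API; the operator names and the supported-op set below are hypothetical). Consecutive backend-supported operators are grouped into one subgraph so the backend can fuse them and keep its own data format inside the subgraph boundary:

    # A conceptual sketch of subgraph partitioning over a linear operator
    # sequence. Real computation graphs are DAGs and the real partitioning
    # rules are backend-specific; this only shows the grouping principle.
    MKLDNN_SUPPORTED = {"Convolution", "BatchNorm", "Activation", "Pooling"}

    def partition(op_sequence, supported):
        """Split an operator sequence into maximal runs of backend-supported
        ops (subgraphs) and individual fallback ops run by MXNet itself."""
        segments, current = [], []
        for op in op_sequence:
            if op in supported:
                current.append(op)
            else:
                if current:
                    segments.append(("subgraph", current))
                    current = []
                segments.append(("mxnet", [op]))
        if current:
            segments.append(("subgraph", current))
        return segments

    print(partition(
        ["Convolution", "BatchNorm", "Activation", "softmax", "Convolution"],
        MKLDNN_SUPPORTED))
    # [('subgraph', ['Convolution', 'BatchNorm', 'Activation']),
    #  ('mxnet', ['softmax']), ('subgraph', ['Convolution'])]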

...

  • Users have to track the MXNet objects manually and remember to call dispose. This is not Java idiomatic and not user friendly. Quoting a user: "this feels like I am writing C++ code which I stopped ages ago".
  • Forgetting to call dispose leads to memory leaks.
  • Many objects in MXNet-Scala are managed in native memory and need dispose called on them as well.
  • Code is bloated with dispose() calls.
  • Memory leaks are hard to debug.

Goals of the project are:

  • Provide MXNet JVM users automated memory management that can release native memory when there are no references to JVM objects.
  • Provide automated memory management for both GPU and CPU memory without performance degradation.

More details can be found here: JVM Memory Management

Distributed Training with Horovod

Horovod is an open source distributed deep learning framework built on high-performance communication primitives. It can significantly improve scaling efficiency when training in distributed environments. Compared to the Parameter Server approach, training with Horovod does not need standalone instances to host parameter servers to achieve the same or even better performance, which can save costs for customers.
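
A minimal sketch of data-parallel Gluon training with Horovod's MXNet binding follows (assuming Horovod is installed with MXNet support and the script is launched with horovodrun; the toy network and hyperparameters are our own):

    # One process is started per worker; each pins its own device.
    import mxnet as mx
    import horovod.mxnet as hvd

    hvd.init()
    ctx = mx.gpu(hvd.local_rank())   # or mx.cpu() on CPU-only machines

    net = mx.gluon.nn.Dense(10)
    net.initialize(ctx=ctx)

    params = net.collect_params()
    # DistributedTrainer averages gradients across workers with allreduce
    # instead of pushing them to parameter servers.
    trainer = hvd.DistributedTrainer(params, 'sgd', {'learning_rate': 0.01})
    # Broadcast initial parameters from rank 0 so all workers start identically.
    hvd.broadcast_parameters(params, root_rank=0)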

More design details can be found here: Horovod-MXNet Integration

Horovod PR to support MXNet: Horovod support for MXNet framework


Topology-aware AllReduce (experimental)

...

More details can be found here: Topology-aware AllReduce
Note: This is an experimental feature and has known problems - see 13341. Please help contribute to improve the robustness of the feature.

...

MKLDNN backend: Graph optimization and Quantization (experimental)

Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to the MKLDNN backend in this release (#12530, #13297, #13260).
These features significantly boost inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with supported Intel CPUs.

Graph Optimization

The MKLDNN backend takes advantage of the MXNet subgraph API to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using the mxnet-mkl package, users can easily enable this feature by setting the environment variable MXNET_SUBGRAPH_BACKEND=MKLDNN, for example as shown below.
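
A minimal sketch of enabling the backend from Python rather than the shell (the variable must be set before the graph is bound so the fused subgraph operators are used):

    # Equivalent to `export MXNET_SUBGRAPH_BACKEND=MKLDNN` in the shell.
    import os
    os.environ['MXNET_SUBGRAPH_BACKEND'] = 'MKLDNN'

    import mxnet as mx  # fusion is applied when the graph is bound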

Quantization

Performance of reduced-precision (INT8) computation is also dramatically improved once the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can quantize most pre-trained models with only a few commands using the new quantization script imagenet_gen_qsym_mkldnn.py, or programmatically as sketched below. The observed accuracy loss is less than 0.5% for popular CNN networks, like ResNet-50, Inception-BN, MobileNet, etc.
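
A minimal programmatic sketch, assuming the mxnet.contrib.quantization.quantize_model API (argument names may vary slightly across versions; the checkpoint name and calibration record file here are placeholders):

    # Quantize a symbolic model to INT8 with entropy calibration.
    import mxnet as mx
    from mxnet.contrib.quantization import quantize_model

    # 'resnet-50' and 'calib.rec' are hypothetical local files.
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
    calib_data = mx.io.ImageRecordIter(path_imgrec='calib.rec',
                                       batch_size=32,
                                       data_shape=(3, 224, 224))

    qsym, qarg_params, aux_params = quantize_model(
        sym=sym, arg_params=arg_params, aux_params=aux_params,
        ctx=mx.cpu(),
        calib_mode='entropy',        # pick thresholds by KL divergence
        calib_data=calib_data,
        num_calib_examples=320,
        quantized_dtype='int8')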

...