
New Features - Export MXNet models to ONNX format

  • With this feature, MXNet models can now be exported to the ONNX format (#11213). Currently, MXNet supports ONNX v1.2.1. API documentation.
  • Check out this example, which shows how to use the MXNet-to-ONNX exporter APIs to serialize models to ONNX protobuf so that they can be imported into other frameworks for inference.
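A minimal sketch of the export flow. The checkpoint file names and input shape below are illustrative, not part of this release:

```python
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Paths to a trained MXNet checkpoint (illustrative; substitute your own).
sym = './resnet-18-symbol.json'
params = './resnet-18-0000.params'

# export_model takes the symbol, the params, the input shape(s) and dtype,
# and writes the serialized ONNX graph to the given file path.
onnx_file = onnx_mxnet.export_model(sym, params, [(1, 3, 224, 224)],
                                    np.float32, 'resnet-18.onnx')
print('exported to', onnx_file)
```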

New Features - Topology-aware AllReduce

  • This feature uses trees to perform the Reduce and Broadcast operations. It builds on the idea of minimum spanning trees to construct a binary-tree Reduce communication pattern, addressing limitations of existing single-machine communication methods such as parameter server and NCCL ring reduction. It is an experimental feature (#11591); a minimal sketch of enabling it follows this list.
  • The implementation follows the paper Optimal message scheduling for aggregation.
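A minimal sketch of opting in. Treat the MXNET_KVSTORE_USETREE environment variable as an assumption about this experimental feature's toggle; it must be set before the kvstore is created:

```python
import os
# Assumption: the experimental tree-based AllReduce is toggled with this
# environment variable, read when MXNet creates the kvstore.
os.environ['MXNET_KVSTORE_USETREE'] = '1'

import mxnet as mx

# Single-machine, multi-GPU kvstore; gradient aggregation now follows the
# tree-based Reduce/Broadcast pattern instead of the default schedule.
kv = mx.kv.create('device')
```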

New Features - Clojure package (experimental)

  • MXNet now supports the Clojure programming language. The MXNet Clojure package brings flexible and efficient GPU computing and state-of-the-art deep learning to Clojure. It enables you to write seamless tensor/matrix computations with multiple GPUs in Clojure. It also lets you construct and customize state-of-the-art deep learning models in Clojure and apply them to tasks such as image classification and data science challenges. (#11205)
  • Check out the examples and API documentation here.

New Features - TensorRT Runtime Integration (experimental)

  • TensorRT provides significant acceleration of model inference on NVIDIA GPUs compared to running the full graph in MXNet using unfused GPU operators. In addition to faster fp32 inference, TensorRT optimizes fp16 inference and is capable of int8 inference (provided the quantization steps are performed). Besides increasing throughput, TensorRT significantly reduces inference latency, especially for small batches.
  • This release introduces runtime integration of TensorRT into MXNet in order to accelerate inference. (#11325)
  • Currently, it lives in the contrib package; a minimal usage sketch follows this list.
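A minimal sketch under stated assumptions: the MXNET_USE_TENSORRT environment variable and the tensorrt_bind signature below reflect my understanding of the experimental contrib API at this release, and the checkpoint names are illustrative:

```python
import os
# Assumption: TensorRT graph passes in the experimental contrib integration
# are gated by this environment variable.
os.environ['MXNET_USE_TENSORRT'] = '1'

import mxnet as mx

# Load a trained checkpoint (file names are illustrative).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)
all_params = {k: v.as_in_context(mx.gpu(0)) for k, v in arg_params.items()}
all_params.update({k: v.as_in_context(mx.gpu(0)) for k, v in aux_params.items()})

# Assumption: tensorrt_bind partitions the graph, offloads supported
# subgraphs to a TensorRT engine, and returns an inference-only executor.
executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(0),
                                             all_params=all_params,
                                             data=(1, 3, 224, 224),
                                             grad_req='null',
                                             force_rebind=True)
```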

New Features - Sparse Tensor support for Gluon

New Features - Fused RNN Operators for CPU

  • MXNet already provides fused RNN operators that run on GPUs through the cuDNN interface, but there was no support for running these operators on CPUs.
  • With this release, MXNet provides these operators for CPUs too! A minimal sketch follows this list.
  • Fused RNN operators added for CPU: LSTM (#10104), vanilla RNN (#10104), GRU (#10311)
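A minimal sketch, assuming the legacy mx.rnn.FusedRNNCell front end drives the fused RNN operator; the sequence length, batch size, and feature sizes below are illustrative:

```python
import mxnet as mx

# FusedRNNCell maps onto the fused RNN operator; with this release the same
# cell can be bound and run on mx.cpu() as well as on GPUs via cuDNN.
cell = mx.rnn.FusedRNNCell(num_hidden=100, num_layers=2, mode='lstm')

data = mx.sym.Variable('data')                 # layout (T, N, C)
outputs, states = cell.unroll(length=35, inputs=data, layout='TNC',
                              merge_outputs=True)

# simple_bind infers and allocates the RNN parameter arrays from the shape.
exe = outputs.simple_bind(ctx=mx.cpu(), data=(35, 8, 50))
exe.forward(is_train=False,
            data=mx.nd.random.uniform(shape=(35, 8, 50)))
print(exe.outputs[0].shape)                    # (35, 8, 100)
```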


New Features - Control flow operators

  • This is the first step towards optimizing dynamic neural networks by adding symbolic and imperative control flow operators; see the proposal for details and the sketch after this list.
  • New operators introduced: foreach (#11531), while_loop (#11566), cond (#11760).
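A minimal imperative sketch using the new foreach operator to compute a running sum over the leading axis:

```python
import mxnet as mx

# The body takes the current slice along axis 0 and the loop states, and
# returns (output_for_this_step, new_states).
def step(data, states):
    running_sum = states[0] + data
    return running_sum, [running_sum]

data = mx.nd.arange(6).reshape((3, 2))        # 3 steps, feature size 2
init_states = [mx.nd.zeros((2,))]

# foreach scans over axis 0 of `data`, threading the state through each
# step; per-step outputs are stacked along axis 0.
outputs, final_states = mx.nd.contrib.foreach(step, data, init_states)
print(outputs)          # cumulative sums: [[0,1],[2,4],[6,9]]
print(final_states[0])  # [6, 9]
```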

MKL-DNN



Breaking Changes


Bug Fixes

Fix Flaky Tests

Performance Improvements



API Changes


Other features


Usability Improvements


Known Issues


How to build MXNet

Please follow the instructions at https://mxnet.incubator.apache.org/install/index.html

List of submodules used by Apache MXNet (Incubating) and when they were updated last

Submodule :: Last updated by MXNet :: Last update in submodule

  1. cub@ :: Jul 31, 2017 :: Jul 31, 2017
  2. dlpack@ :: Oct 30, 2017 :: Oct 30, 2017
  3. dmlc-core@ :: April 4, 2018 :: April 4, 2018
  4. googletest@ :: July 14, 2016 :: July 14, 2016
  5. mkldnn@ :: April 26, 2018 :: April 26, 2018
  6. mshadow@ :: July 9, 2018 :: July 9, 2018
  7. onnx-tensorrt@ :: May 25, 2018 :: May 25, 2018
  8. openmp@ :: Nov 14, 2017 :: Nov 14, 2017
  9. ps-lite@ :: April 25, 2018 :: April 25, 2018
  10. tvm@ :: June 22, 2018 :: June 22, 2018

