
MXNet v0.11 Release Candidate

Major Features

  • Apple Core ML model converter

  • Support for Keras v1.2.2

API Changes

  1. Added `CachedOp`. You can now cache operators that are called frequently with the same set of arguments to reduce overhead.

  2. Added `sample_multinomial` for sampling from multinomial distributions.

  3. Added `trunc` operator for rounding towards zero.

  4. Added `linalg_gemm`, `linalg_potrf`, ... operators for LAPACK support.

  5. Added verbose option to Initializer for printing out initialization details.

  6. Added DeformableConvolution to contrib from the Deformable Convolutional Networks paper.

  7. Added float64 support for dot and batch_dot operator.

  8. `allow_extra` is added to Module.set_params to ignore extra parameters.

  9. Added `mod` operator for modulo.

  10. Added a `multi_precision` option to the SGD optimizer to improve training with float16. ResNet-50 now achieves the same accuracy when trained with float16 and gives a 50% speedup on a Titan XP.
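To illustrate the semantics of a few of the new operators, here is a short sketch using their NumPy equivalents (illustration only; MXNet's `trunc`, `mod`, and `sample_multinomial` apply the same semantics to NDArrays):

```python
import numpy as np

# `trunc` rounds toward zero, unlike `floor`, which rounds toward -inf.
x = np.array([-2.5, 2.5])
truncated = np.trunc(x)   # [-2.,  2.]
floored = np.floor(x)     # [-3.,  2.]

# `mod` is elementwise modulo (result takes the sign of the divisor).
m = np.mod(np.array([7, -7]), 3)  # [1, 2]

# Sampling from a multinomial distribution over three outcomes:
# 100 draws distributed according to the given probabilities.
rng = np.random.default_rng(0)
counts = rng.multinomial(100, [0.2, 0.3, 0.5])
```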

Performance Improvements

  1. ImageRecordIter now stores data in pinned memory to improve GPU memory-copy speed.

Bugfixes

  1. Cython interface is fixed. `make cython` and `python setup.py install --with-cython` should install the cython interface and reduce overhead in applications that use imperative/bucketing.

  2. Fixed various bugs in Faster-RCNN example: https://github.com/dmlc/mxnet/pull/6486

  3. Fixed various bugs in SSD example.

  4. Fixed `out` argument not working for `zeros`, `ones`, `full`, etc.

  5. `expand_dims` now supports backward shape inference.

  6. Fixed a bug in `rnn.BucketSentenceIter` that caused incorrect layout handling on multi-GPU setups.

  7. Fixed context mismatch when loading optimizer states.

  8. Fixed a bug in ReLU activation when using MKL.

  9. Fixed a few race conditions that caused crashes on shutdown.
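Bugfix 4 restores the out-parameter pattern for the array-creation operators. As a sketch of what that pattern buys you, here is the same idea with NumPy ufuncs (NumPy used for illustration; the `out` semantics for MXNet's `zeros`/`ones`/`full` are analogous):

```python
import numpy as np

# `out=` writes the result into a preallocated buffer instead of
# allocating a fresh array on every call.
buf = np.empty(4)
np.add(np.ones(4), np.ones(4), out=buf)   # buf now holds [2., 2., 2., 2.]

# Reusing the buffer avoids an allocation per iteration, which matters
# inside tight training loops.
np.multiply(buf, 3.0, out=buf)            # buf now holds [6., 6., 6., 6.]
```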

Refactors

  1. Refactored TShape/TBlob to use int64 dimensions and DLTensor as internal storage, in preparation for migration to DLPack. As a result, TBlob::dev_mask_ and TBlob::stride_ are removed.

 

Keras 1.2.2 with MXNet Backend  

Highlights

  1. Added an Apache MXNet backend for Keras 1.2.2.
  2. Easy-to-use multi-GPU training with the MXNet backend.
  3. High-performance model training in Keras with the MXNet backend.

Getting Started Resources

  1. Installation - https://github.com/dmlc/keras/wiki/Installation
  2. How to use Multi-GPU for training in Keras with MXNet backend - https://github.com/dmlc/keras/wiki/Using-Keras-with-MXNet-in-Multi-GPU-mode
  3. For more examples, explore the keras/examples directory.
  4. Source Repo - https://github.com/dmlc/keras

For more details on unsupported functionality, known issues, and additional resources, refer to the release notes - https://github.com/dmlc/keras/releases

 

Apple CoreML Converter

You can now convert your MXNet models into the Apple Core ML format so that they can run on Apple devices. This means you can build your next iPhone app using your own MXNet model!

This tool currently supports conversion of models that are similar to:

  • Inception
  • Network-in-Network
  • SqueezeNet
  • ResNet
  • VGG

List of layers that can be currently converted: Activation, Batchnorm, Concat, Convolution, Deconvolution, Dense, Elementwise, Flatten, Pooling, Reshape, Softmax, Transpose.

For more details on how to convert the models, refer to https://github.com/apache/incubator-mxnet/tree/master/tools/coreml
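As a rough sketch, a conversion is driven from the command line; the script name and flag names below are assumptions based on the tools/coreml directory linked above, not verified against this release, so consult that README for the actual interface:

```shell
# Hypothetical invocation -- script and flag names are assumptions; see the
# tools/coreml README for the actual interface and supported options.
python mxnet_coreml_converter.py \
    --model-prefix='squeezenet_v1.1' \
    --epoch=0 \
    --input-shape='{"data":"3,227,227"}' \
    --mode=classifier \
    --class-labels classLabels.txt \
    --output-file='squeezenet_v11.mlmodel'
```

The resulting `.mlmodel` file can then be dropped into an Xcode project and invoked through the Core ML APIs.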

Requires:

  • macOS High Sierra (10.13)
  • Xcode 9
  • MXNet 0.10.0 or greater
  • Python 2.7
  • coremltools 0.5.0 or greater
  • PyYAML