Keras 1.2.2 with MXNet Backend  

Highlights

  1. Added the Apache MXNet backend for Keras 1.2.2.
  2. Easy-to-use multi-GPU training with the MXNet backend (see the sketch after this list).
  3. High-performance model training in Keras with the MXNet backend.
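
With the MXNet backend, multi-GPU training is driven from the regular Keras workflow. A minimal sketch is shown below, assuming a CUDA-enabled MXNet build and the context argument to model.compile() described on the multi-GPU wiki page linked in the resources; the model and data are illustrative only:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Illustrative toy data: 1000 samples, 100 features, 10 classes.
x_train = np.random.random((1000, 100))
y_train = np_utils.to_categorical(np.random.randint(10, size=(1000,)), 10)

# A small fully connected network (Keras 1.2.2 API).
model = Sequential()
model.add(Dense(64, input_dim=100, activation='relu'))
model.add(Dense(10, activation='softmax'))

# With the MXNet backend, passing a list of MXNet device contexts to
# compile() enables data-parallel training across those GPUs.
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'],
              context=['gpu(0)', 'gpu(1)', 'gpu(2)', 'gpu(3)'])

model.fit(x_train, y_train, nb_epoch=5, batch_size=128)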

Getting Started Resources

  1. Installation - https://github.com/dmlc/keras/wiki/Installation
  2. How to use Multi-GPU for training in Keras with MXNet backend - https://github.com/dmlc/keras/wiki/Using-Keras-with-MXNet-in-Multi-GPU-mode
  3. For more examples, explore the keras/examples directory.
  4. Source Repo - https://github.com/dmlc/keras

For more details on unsupported functionality, known issues, and additional resources, refer to the release notes - https://github.com/dmlc/keras/releases

 

Apple CoreML Converter

You can now convert your MXNet models into the Apple CoreML format so that they can run on Apple devices. This means you can build your next iPhone app using your own MXNet model!

This tool currently supports conversion of models that are similar to:

  1. Inception
  2. Network-In-Network
  3. SqueezeNet
  4. ResNet
  5. VGG

List of layers that can be converted:

  1. Activation
  2. Batchnorm
  3. Concat
  4. Convolution
  5. Deconvolution
  6. Dense
  7. Elementwise
  8. Flatten
  9. Pooling
  10. Reshape
  11. Softmax
  12. Transpose
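
For illustration (not taken from the converter documentation itself), here is a small MXNet network assembled only from layer types in the list above; a model like this falls within the converter's supported set:

import mxnet as mx

# Toy network built solely from convertible layer types:
# Convolution -> Activation -> Pooling -> Flatten -> Dense (FullyConnected) -> Softmax.
data = mx.sym.Variable('data')
conv = mx.sym.Convolution(data=data, kernel=(3, 3), num_filter=32, name='conv1')
act = mx.sym.Activation(data=conv, act_type='relu', name='relu1')
pool = mx.sym.Pooling(data=act, pool_type='max', kernel=(2, 2), stride=(2, 2), name='pool1')
flat = mx.sym.Flatten(data=pool, name='flatten1')
fc = mx.sym.FullyConnected(data=flat, num_hidden=10, name='fc1')
net = mx.sym.SoftmaxOutput(data=fc, name='softmax')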

Requirements:
To run the converter you need the following:
  1. macOS High Sierra (10.13)
  2. Xcode 9
  3. coremltools 0.5.0 or greater (pip install coremltools)
  4. MXNet 0.10.0 or greater (Installation Instructions)
  5. PyYAML (pip install pyyaml)
  6. Python 2.7
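
Once these are installed, a quick way to confirm the Python-side prerequisites (a convenience check, not part of the converter) is:

import sys
import coremltools
import mxnet
import yaml

print(sys.version)               # expect Python 2.7.x
print(mxnet.__version__)         # expect 0.10.0 or greater
print(coremltools.__version__)   # expect 0.5.0 or greater
print(yaml.__version__)          # PyYAML version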

Example:

To convert, say, a SqueezeNet model (which can be downloaded from here), execute the following command (assuming you are in the directory where mxnet_coreml_converter.py resides):

python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,227,227"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="squeezenetv11.mlmodel"

In the command above:

model-prefix: refers to the MXNet model prefix (may include the directory).

epoch: refers to the epoch number that forms the suffix of the MXNet model parameters file (for epoch 0, the file ends in -0000.params).

input-shape: refers to the input shape information as a JSON string, where the key is the name of the input variable ("data" here) and the value is the shape of that variable.

mode: refers to the CoreML model mode. It can be 'classifier', 'regressor', or None. In this case we use 'classifier', since we want the resulting CoreML model to classify images into various categories.

pre-processing-arguments: CoreML requires image inputs to be of type Image. By providing image_input_names as "data", we declare that the input variable "data" is of type Image.

class-labels: refers to the name of the file which contains the classification labels (a.k.a. synset file).

output-file: the file to which the converted CoreML model will be written.
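
As a quick sanity check (a sketch, not part of the converter; it assumes the output file name used in the command above), you can load the generated model with coremltools and inspect its inputs and outputs:

import coremltools

# Load the CoreML model produced by mxnet_coreml_converter.py.
mlmodel = coremltools.models.MLModel('squeezenetv11.mlmodel')

# Print the model's input/output description and metadata.
print(mlmodel.get_spec().description)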

More Information

You can find more detailed explanations as well as more examples of the converter here.

To use the generated CoreML model file in your project, refer to Apple's tutorial here.