...

Table of Contents

Objectives

This document focuses on the ONNX format as a starting point, but the proposed design should be generic enough to extend to other formats later when needed.

ONNX is an intermediate representation for describing a neural network computation graph and weights. With AWS, Microsoft, and Facebook defining and promoting ONNX, major deep learning frameworks such as MXNet, PyTorch, Caffe2, and CNTK are building native support for model import and export.

This document defines the use cases for ONNX in MXNet, primarily import and export, and proposes a technical design that addresses these use cases. The ONNX import functionality is already implemented, but the code lives in an external repository under the onnx org controlled by Facebook.

Use cases:

1) Import an ONNX model into the MXNet symbolic interface for inference.

...

  • “serde” (serialization/deserialization; the name is taken from the Apache Hive project), which can be called as mx.serde.import / mx.serde.export (a usage sketch follows this list).
  • Alternatively, this package can go under contrib. MXNet contrib is used to add experimental features that can later be moved out. This can qualify as an experimental feature, since there is a gap in operator implementation. (See Appendix.)
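
For illustration, here is how the proposed wrapper could look from the user's side. This is a minimal sketch only: the mx.serde module and both function signatures are the proposal above, not an existing API.

    import mxnet as mx

    # Import an ONNX model into MXNet's symbolic interface (proposed API;
    # this signature is illustrative and not implemented yet).
    sym, arg_params, aux_params = mx.serde.import_model('model.onnx', format='onnx')

    # Export a trained MXNet model to ONNX (proposed API, same caveat).
    mx.serde.export_model(sym, arg_params, aux_params, 'model.onnx', format='onnx')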

...

Note: Currently Gluon does not provide an easy way to import a pre-trained model (there is a workaround through which this can be done).
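
For reference, a minimal sketch of that workaround, assuming a model saved with the symbolic API (the checkpoint prefix here is hypothetical). It relies on a private Gluon helper, which is why it counts as a workaround rather than a supported path:

    import mxnet as mx
    from mxnet import gluon

    # Load a pre-trained symbolic model from a checkpoint.
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)

    # Wrap the symbol so it can be called from Gluon.
    net = gluon.SymbolBlock(outputs=sym, inputs=mx.sym.var('data'))

    # Copy the pre-trained weights into the block's parameters
    # (uses the private _load_init helper).
    params = net.collect_params()
    for name, value in dict(arg_params, **aux_params).items():
        if name in params:
            params[name]._load_init(value, ctx=mx.cpu())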

 

Export an MXNet model to the specified format.

...

There are two approaches that can be taken to import/export an ONNX model.

1) Through

...

MXNet's symbolic operators

Implement at the MXNet layer by parsing the ONNX model (in protobuf format), turning it into MXNet symbolic operators, and building the MXNet model directly. Similarly, the MXNet model can be converted to ONNX format at this layer.
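
A minimal sketch of this approach for import is shown below. The operator table is illustrative (only a few activation ops); a real converter must also translate per-operator attributes and bind the weight initializers:

    import onnx
    import mxnet as mx

    # Illustrative 1:1 operator mapping; a real converter also needs
    # attribute translation for each operator.
    ONNX_TO_MXNET = {
        'Relu': mx.sym.relu,
        'Sigmoid': mx.sym.sigmoid,
        'Tanh': mx.sym.tanh,
    }

    def import_onnx(model_file):
        """Parse an ONNX protobuf and rebuild the graph as MXNet symbols."""
        graph = onnx.load(model_file).graph
        # Graph inputs (including weights, in older opsets) become variables.
        symbols = {inp.name: mx.sym.var(inp.name) for inp in graph.input}
        for node in graph.node:
            op = ONNX_TO_MXNET[node.op_type]
            inputs = [symbols[name] for name in node.input]
            symbols[node.output[0]] = op(*inputs, name=node.output[0])
        # Group all graph outputs into a single MXNet symbol.
        return mx.sym.Group([symbols[out.name] for out in graph.output])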

...

  • Stable APIs currently used by users.
  • More operator support is available in MXNet (70%) than in nnvm/top currently (32%). (See Appendix.)

Cons:

  • In the future, we may have to reimplement at the nnvm/tvm layer, in case MXNet moves to the nnvm/tvm backend. If this happens, conversion for the existing symbolic operators will need to be implemented for backward compatibility, which can be leveraged for onnx-mxnet conversion as well.
  • MXNet's API contains some legacy issues that are supposedly fixed in the nnvm/top operators. #issue lists some of these issues and the plan to fix them.

...

  • Does not support all operators that exist in the MXNet symbolic API or in ONNX; there is a 1:1 mapping for only 32% of ONNX operators. (See Appendix.)
  • The current Apache MXNet project does not use the nnvm/tvm backend, so users will need to install the nnvm/tvm package separately for now.
  • We need more data on who is using the nnvm/tvm backend today, for what use cases, and whether it is reliable.

Internal API design

The implementation will go under the nnvm repo.

...

As a middle ground between the two implementation choices above, I propose taking the first approach and implementing MXNet->ONNX conversion for the export functionality; anyone who wants to take advantage of the NNVM/TVM optimized engine can do so by leveraging the import functionality provided in the NNVM/TVM package.

Recently, NVIDIA has worked on an MXNet->ONNX exporter, which lives in the mxnet_to_onnx GitHub repo. This implementation is also based on the first approach. There is already an issue created to contribute this functionality to the MXNet package once we have the “serde” wrapper. Though this functionality currently does only file-to-file conversion (sym, params -> protobuf), it can be extended further to do in-memory model conversion (module -> protobuf).

...

Whichever of the above approaches we take, it is an implementation detail that won't change the way these APIs are delivered to users.

The general structure of ONNX export after training would look something like this:
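
(A sketch under the assumption that the proposed mx.serde wrapper exists; the network symbol sym and the iterators train_iter/val_iter are assumed to be defined elsewhere.)

    import mxnet as mx

    # Train with the standard Module API.
    mod = mx.mod.Module(symbol=sym, data_names=['data'], label_names=['softmax_label'])
    mod.fit(train_iter, eval_data=val_iter, num_epoch=10)

    # Persist the trained symbol and parameters to disk.
    mod.save_checkpoint('mynet', 10)

    # Export to ONNX through the proposed wrapper (hypothetical API, see above).
    sym_t, arg_params, aux_params = mx.model.load_checkpoint('mynet', 10)
    mx.serde.export_model(sym_t, arg_params, aux_params, 'mynet.onnx', format='onnx')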

...