...

  1. Support commonly used operators in MXNet
  2. Investigate ways to evaluate and mitigate performance impact
  3. Add Benchmark tests for large tensor operators
  4. Write a blog post about memory allocation research
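Goals 2 and 3 call for measuring what 64-bit indexing costs relative to 32-bit. A minimal sketch of such a micro-benchmark is below, using NumPy as a stand-in for the MXNet backend (the array sizes and iteration counts are illustrative, not the project's actual benchmark harness): it times the same gather operation driven by int32 and int64 index arrays.

```python
import timeit
import numpy as np

# Illustrative micro-benchmark (NumPy stand-in, not MXNet itself):
# time an indexed gather driven by 32-bit vs 64-bit index arrays,
# the kind of operator-level comparison goal 2 asks for.
data = np.arange(10_000_000, dtype=np.float32)
rng = np.random.default_rng(0)
idx32 = rng.integers(0, data.size, size=1_000_000).astype(np.int32)
idx64 = idx32.astype(np.int64)

t32 = timeit.timeit(lambda: data.take(idx32), number=20)
t64 = timeit.timeit(lambda: data.take(idx64), number=20)
print(f"int32 indices: {t32:.4f}s  int64 indices: {t64:.4f}s")
```

A real benchmark for this project would run each tracked operator at both dtypes and report the relative slowdown, at operator level and model level.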

Operators

...

to be supported

This project enables MXNet to support large tensors. It should also provide a guideline for future developers on the correct data type to choose when defining integer variables in the MXNet backend. We should also provide performance benchmarks, at both the operator level and the model level, comparing 64-bit and 32-bit integers. Moreover, we need to provide a mechanism to prevent future PRs from breaking this support.
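The data-type choice matters because a tensor with more than 2**31 - 1 elements cannot be addressed by a signed 32-bit integer. The pure-Python sketch below (the helper `wrap_to_int32` is hypothetical, written just for illustration) simulates what happens when such an element count is stored in an int32; MXNet's C++ backend faces the same silent wraparound.

```python
# A tensor of 50,000 x 100,000 elements overflows a signed 32-bit index.
INT32_MAX = 2**31 - 1  # 2,147,483,647

def wrap_to_int32(value):
    """Simulate storing `value` in a signed 32-bit integer (two's complement)."""
    value &= 0xFFFFFFFF
    return value - 2**32 if value >= 2**31 else value

num_elements = 50_000 * 100_000           # 5 * 10**9 elements
assert num_elements > INT32_MAX           # too big for int32...
print(wrap_to_int32(num_elements))        # → 705032704: the index silently wraps
```

This is why backend integer variables that hold element counts or offsets must use a 64-bit type (e.g. `int64_t` / `index_t`) rather than `int`.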

The following table keeps track of the operators that need to support large array operations:

Operator           Done             Test                  Comments
ones               Y                test_large_array.py
zeros              Y                test_large_array.py
empty              Y                test_large_array.py
dot                Y                test_large_array.py
uniform            Y                test_large_array.py
broadcast_to       Y                test_large_array.py
clip               Y                test_large_array.py
take               Y                test_large_array.py
slice              Y                test_large_array.py
squeeze            Y                test_large_array.py
broadcast_div      Y                test_large_array.py
pick               Y                test_large_array.py
depth_to_space     PR under Review  test_large_array.py   PR: https://github.com/apache/incubator-mxnet/pull/14797
space_to_depth     PR under Review  test_large_array.py   PR: https://github.com/apache/incubator-mxnet/pull/14797
diag               N
pad                N
softmax            N
ravel_multi_index  N
unravel_index      N
topk               PR under Review                        PR: https://github.com/zheng-da/incubator-mxnet/commit/bef7dffa8c90cb68a8f04aa8e88faf380c3fad2ba

List of operators currently not yet supported
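The tests listed above live in test_large_array.py and allocate arrays whose element count exceeds the int32 range. The sketch below follows that test pattern but uses NumPy as a stand-in for mx.nd, with a deliberately reduced (hypothetical) LARGE_X so it runs quickly; the real tests use shapes large enough to exercise 64-bit indexing.

```python
import numpy as np

# Sketch of the test_large_array.py pattern (NumPy stand-in; sizes are
# reduced for illustration -- the real tests use element counts > 2**32).
LARGE_X = 100_000   # hypothetical reduced size; real tests use e.g. 100_000_000
SMALL_Y = 50

def test_ones():
    a = np.ones(shape=(LARGE_X, SMALL_Y))
    assert a.shape == (LARGE_X, SMALL_Y)
    assert a.size == LARGE_X * SMALL_Y
    assert a[-1, -1] == 1  # the last element must be reachable by index

def test_clip():
    a = np.arange(LARGE_X, dtype=np.float32)
    b = np.clip(a, a_min=100, a_max=1000)
    assert b.min() == 100 and b.max() == 1000

test_ones()
test_clip()
```

Each operator marked Y in the table has a corresponding test of this shape, which also serves as the regression guard against future PRs breaking large tensor support.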

Open Questions

  • How to verify that all operators support large tensors
  • Impact on GPU performance
  • MKLDNN support
  • cuDNN support

...