...
- Support commonly used operators in MXNet
- Investigate ways to evaluate and mitigate performance impact
- Add Benchmark tests for large tensor operators
- Write a blog post about memory allocation research
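As a starting point for the benchmark work listed above, a minimal timing helper might look like the following. This is a hypothetical sketch using NumPy as a stand-in workload; it does not assume any particular MXNet benchmark harness:

```python
import time
import numpy as np

def bench(fn, repeat=5):
    """Return the best wall-clock time of fn over `repeat` runs."""
    times = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

# Compare an operator on a small vs. a larger input to gauge scaling.
small = bench(lambda: np.ones((1000, 1000)))
large = bench(lambda: np.ones((4000, 4000)))
```

The same pattern applies to operator-level comparisons between the 64-bit and 32-bit builds: run the identical operator call against each build and compare the minima.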
Operators to be supported
...

This project enables MXNet to support large tensors. It should also provide guidelines for future developers on the correct data type to choose when defining integer variables in the MXNet backend. We should also provide performance benchmarks, at both the operator level and the model level, comparing 64-bit and 32-bit integers. Moreover, we need to provide a mechanism to prevent future PRs from breaking this support.
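To illustrate why the integer type choice matters: the element count of a large tensor can exceed the 32-bit signed maximum of 2^31 - 1, so a 32-bit index silently wraps to a negative value. A small Python demonstration, using ctypes to emulate a 32-bit counter (illustrative only, not MXNet backend code):

```python
import ctypes

INT32_MAX = 2**31 - 1  # 2,147,483,647

# A 50000 x 50000 tensor has 2.5 billion elements.
shape = (50000, 50000)
num_elems = shape[0] * shape[1]

# Emulate storing the element count in a 32-bit signed integer.
wrapped = ctypes.c_int32(num_elems).value

assert num_elems > INT32_MAX  # exceeds what int32 can represent
assert wrapped < 0            # the 32-bit value wraps around to negative
```

Any backend variable that holds a shape product, offset, or loop bound over tensor elements therefore needs a 64-bit type to behave correctly on large tensors.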
The following spreadsheet keeps track of the operators that need to support large array operations:
Operators | Done | Test | Comments |
--- | --- | --- | --- |
ones | Y | test_large_array.py | |
zeros | Y | test_large_array.py | |
empty | Y | test_large_array.py | |
dot | Y | test_large_array.py | |
uniform | Y | test_large_array.py | |
broadcast_to | Y | test_large_array.py | |
clip | Y | test_large_array.py | |
take | Y | test_large_array.py | |
slice | Y | test_large_array.py | |
squeeze | Y | test_large_array.py | |
broadcast_div | Y | test_large_array.py | |
pick | Y | test_large_array.py | |
depth_to_space | PR under Review | test_large_array.py | PR: https://github.com/apache/incubator-mxnet/pull/14797 |
space_to_depth | PR under Review | test_large_array.py | PR: https://github.com/apache/incubator-mxnet/pull/14797 |
diag | N | | |
pad | N | | |
softmax | N | | |
ravel_multi_index | N | | |
unravel_index | N | | |
topk | PR under Review | | PR: https://github.com/zheng-da/incubator-mxnet/commit/bef7dffa8c90cb68a8f04aa8e88faf380c3fad2b |

Operators marked "N" above are currently not yet supported.
Open Questions
- How to verify that all operators support large tensors
- Impact on GPU
- MKLDNN support
- cuDNN support
...