
Currently, MXNet operators and NDArray only support tensors with fewer than 2^32 (4,294,967,296) elements. This is because the data type used for array element indexing, as well as for value storage, defaults to uint32_t in the MXNet backend.
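As a quick illustration (not MXNet code), the sketch below shows how a 32-bit index wraps around once the element count reaches 2^32, which is why tensors at or beyond that size cannot be addressed with the current types:

```cpp
#include <cstdint>
#include <iostream>

int main() {
  uint32_t idx32 = 4294967295U;  // 2^32 - 1, the largest uint32_t value
  std::cout << idx32 + 1 << std::endl;  // prints 0: the 32-bit index wraps around
  int64_t idx64 = static_cast<int64_t>(idx32) + 1;
  std::cout << idx64 << std::endl;      // prints 4294967296: a 64-bit index keeps counting
  return 0;
}
```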

To support large tensors, we need a systematic change across the entire MXNet backend and the Python front end. At a minimum, the following tasks are required:

  1. Choose a data type that scales beyond 2^32 for indexing elements in an array.
  2. Update the data value type to use the new data type as well.
  3. Ideally, the data type should not be fixed, so that it can be adjusted across different platforms (see the sketch after this list).
  4. Run performance tests on various platforms to ensure there is no significant runtime and/or memory degradation.
  5. Document the change.
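A minimal sketch of how task 3 might be realized: the index type is selected with a compile-time flag rather than hard-coded. The flag name USE_INT64_TENSOR_SIZE and the typedef shown here are illustrative assumptions, not the actual MXNet/mshadow definitions.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only: select the index type at build time.
#if defined(USE_INT64_TENSOR_SIZE) && USE_INT64_TENSOR_SIZE == 1
typedef int64_t index_t;   // large-tensor builds: indices may exceed 2^32
#else
typedef int32_t index_t;   // default builds: smaller indices, smaller memory footprint
#endif

// Indexing code is written against index_t, so flipping the build flag
// changes the supported tensor size without touching individual operators.
index_t LastIndex(size_t num_elements) {
  return static_cast<index_t>(num_elements) - 1;
}
```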

A JIRA epic has been created to track this project.

Proposed Approach

Use index_t for indexing elements in a tensor.

Use size_t for returning the size of an object or the total number of elements.

auto is a C++11 keyword that deduces a variable's data type from the right-hand side of its initializer.
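The sketch below illustrates these conventions together; the Vec struct and its members are hypothetical placeholders, not actual MXNet classes:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

typedef int64_t index_t;  // assumed large-tensor index type

struct Vec {
  std::vector<float> data;
  // size_t is used when returning the total number of elements.
  size_t Size() const { return data.size(); }
  // index_t is used when indexing individual elements.
  float& operator[](index_t i) { return data[static_cast<size_t>(i)]; }
};

float Sum(Vec& v) {
  float total = 0.0f;
  // auto deduces size_t here from the right-hand side (v.Size()),
  // so this loop stays correct even if the return type changes later.
  auto n = v.Size();
  for (index_t i = 0; i < static_cast<index_t>(n); ++i) {
    total += v[i];
  }
  return total;
}
```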



 
