- New features
- Feature improvements
- Front end API
- Language Bindings
- Performance improvements
- Examples and tutorials
- Website and documentation
- CI/CD
- License
- Miscellaneous changes
- How to build MXNet
New features
Dynamic subgraph and custom operator support
...
- [Large Tensor] Add support to Random Sample & Pdf ops (#17445)
- [Large Tensor] Add LT support for NN optimizers and 1 activation function (#17444)
- [Large Tensor] Fixed SoftmaxActivation op (#17634)
- [Large Tensor] Fixed col2im op (#17622)
- [Large Tensor] Fixed Spatial Transformer op (#17617)
- [Large Tensor] Fix ravel_multi_index op (#17644)
- Sparse int64 Large tensor support (#16898)
- Re-Enabling Large Tensor Nightly on GPU (#16164)
- enabling build stage gpu_int64 to enable large tensor nightly runs (#17546)
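The large tensor changes above extend int64 indexing support across more operators. To use them, MXNet must be built with int64 tensor size enabled; a minimal sketch of such a build, assuming the `USE_INT64_TENSOR_SIZE` CMake flag used by the v1.x branch:

```shell
# Build MXNet from a source checkout with large (int64) tensor support.
# USE_INT64_TENSOR_SIZE enables tensors with more than 2^31 elements.
mkdir -p build && cd build
cmake -DUSE_INT64_TENSOR_SIZE=ON -DUSE_CUDA=OFF ..
make -j"$(nproc)"
```

Large tensor support increases index storage overhead, which is why it is gated behind a build flag rather than enabled by default.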
MKL-DNN enhancements
- MKLDNN FC : Add error info when mkldnn fc bias dimension is wrong (#16692)
- [MKLDNN] support mkldnn gelu (#16710)
- [MKLDNN] Fix int8 convolution/fc bias overflow (#16734)
- [MKLDNN] use dim_t instead of int in slice/transpose operators (#16737)
- Mkldnn fullyConnect bwd bug fix (#16890)
- Revert Mkldnn fullyConnect bwd bug fix (#16890) (#16907)
- [MKLDNN] Use MKLDNNRun (#16772)
- [MKLDNN] mkldnn RNN operator enhancement (#17075)
- [MKLDNN] enable MaxPooling with full pooling convention (#16860)
- update mkldnn to v1.1.2 (#17165)
- improve mkldnn doc (#17198)
- [MKLDNN] Fix _copyto (#17173)
- [MKLDNN] Support channel wise quantization for FullyConnected (#17187)
- fixed seed for mkldnn test (#17386)
- add mkldnn softmax backward (#17170)
- cmake: copy dnnl headers to include/mkldnn (#17647)
- [mkldnn]Mkldnn bn opt backport from master to 1.7x (#18009)
- [v1.x] Update 3rdparty/mkldnn remote URL and pin to v1.3 (#17972) (#18033)
- [v1.x] backport #17900 [MKLDNN] support using any format in pooling backward (#18067)
- Static link MKL-DNN library (#16731)
- Add large tensor nightly tests for MKL-DNN operators (#16184)
- [MKL-DNN] Enable and Optimization for s8 eltwise_add (#16931)
- [MKL-DNN] Enhance Quantization Method (#17161)
- Static Build and CD for mxnet-cu102/mxnet-cu102mkl (#17074)
- MKL-DNN RNN backward path enhancement (#17183)
- cmake: check USE_OPENMP and pass proper MKL-DNN build flags (#17356)
- update mkl to 2020.0 (#17355)
- Enable MKL-DNN by default in pip packages (#16899)
- Enable MKL-DNN FullyConnected backward (#17318)
- Softmax primitive cache and in-place computation (#17152)
- boolean_mask_assign with start_axis (#16886)
- use identity_with_cast (#16913)
- change error tolerance for bf16 bn (#18110)
- [v1.x] Backport #17689 and #17884 to v1.x branch (#18064)
- refactor codes and add an option to skip/check weight's version to reduce overhead (#17707) (#18039)
- [v1.x] Backport #17702 and #17872 to v1.x branch (#18038)
...