...
Issue | PRs | Contributor(s) | Notes |
---|---|---|---|
FP32 optimization | #14893 (last PR under review) | @ciyongch, @TaoLv, @juliusshufa, @yinghu5 | |
Dependency upgrade for MXNet | NA | @stu1130 | Update to the latest CUDA 10.1 and cuDNN 7.5.1 for MXNet and MXNet CI |
Conversion from FP32 to Mixed Precision Models (https://github.com/apache/incubator-mxnet/issues/14584) | NA | @anirudh2290 | Depends on the AMP PR |
MKLDNN RNN inference integration (FP32 LSTM and vRNN with tanh and ReLU) | #14713 | @lihaofd, @TaoLv, @pengzhao-intel | Improves performance of certain operators used in RNN models |
 | https://github.com/dmlc/gluon-nlp/pull/710 | haibin | Resolved on the gluon-nlp side |
https://github.com/apache/incubator-mxnet/issues/15028 | https://github.com/apache/incubator-mxnet/pull/15039 | | AMP tutorial test failed; blocking the nightly test |
https://github.com/apache/incubator-mxnet/issues/15029 | NA | @zheng-da, @apeforest | Waiting for a fix |
0-size tensor patch for quantization | https://github.com/apache/incubator-mxnet/pull/15031 | @ciyongch | Under review |
https://github.com/apache/incubator-mxnet/issues/15034 | NA | @DickJC123, @lihaofd | Brought by a previous change to RNN |
Release Timeline
The following timeline assumes everything goes well (some buffer time has been added).
...