
...

The graph below demonstrates the design and execution flow of the fused RNN operator in MXNet. Green blocks have already been implemented for NVIDIA GPUs through the CuDNN interfaces. Yellow blocks were recently integrated by PR#9977, but only for the LSTM inference path. Blue blocks will be added to extend PR#9977 to training and to other RNN variants. Currently, PR#10104 is submitted for the fused LSTM implementation and PR#10311 for the fused GRU implementation. Vanilla RNN is planned and will be provided in the future.

(Figure: design and execution flow of the fused RNN operator in MXNet)

Operator Registration

Currently, `sym.RNN` is registered into MXNet through the legacy DMLC interfaces. We are trying to refactor this part of the code with NNVM interfaces. For the NNVM registration design, operator creation, caching and workspace sharing need to be redesigned so that this information can be passed between the forward and backward paths and across iterations.
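As a rough reference, the sketch below shows what an NNVM-style registration of a stateful RNN operator could look like. `FCreateOpState` and `FStatefulCompute<cpu>` are the NNVM attribute keys MXNet provides for stateful operators; the callback names (`CreateRNNState`, `RNNStatefulCompute`) are placeholders used here for illustration only, not the final implementation.

```cpp
// Sketch only: a stateful NNVM registration, so that one RNNOp instance is
// created per graph node and then reused by forward, backward and later
// iterations. CreateRNNState / RNNStatefulCompute are placeholder callbacks.
NNVM_REGISTER_OP(RNN)
// Create the stateful operator; the executor caches the returned OpStatePtr,
// so the instance (and any workspace it owns) survives across calls.
.set_attr<FCreateOpState>("FCreateOpState", CreateRNNState)
// Stateful compute callbacks receive that cached state on every invocation.
.set_attr<FStatefulCompute>("FStatefulCompute<cpu>", RNNStatefulCompute<cpu>)
.set_attr<FStatefulCompute>("FStatefulCompute<gpu>", RNNStatefulCompute<gpu>);
```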

...

As described above, reusing forward intermediate results during the backward computation reduces the amount of computation and improves backward performance significantly. A reserved workspace buffer is defined as a private member of the RNNOp class. This buffer stores the intermediate results of the forward computation and is reused during the backward computation. If the operator instance is cached and reused in later iterations, the workspace buffer is reused as well. The workspace buffer is released when the operator instance is destructed.
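The self-contained sketch below illustrates this mechanism; the class name `FusedRNNOp`, the member `reserve_space_` and the sizing logic are illustrative assumptions, not MXNet's actual class layout.

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in for a fused RNN operator; names, members and the
// workspace sizing are hypothetical and only demonstrate the mechanism.
class FusedRNNOp {
 public:
  FusedRNNOp(size_t seq_len, size_t batch, size_t hidden)
      : seq_len_(seq_len), batch_(batch), hidden_(hidden) {}

  void Forward(const std::vector<float>& x, std::vector<float>* y) {
    // Allocate the reserved workspace once; because the operator instance is
    // cached by the executor, the buffer is reused in later iterations.
    if (reserve_space_.empty()) {
      reserve_space_.resize(seq_len_ * batch_ * hidden_ * kGateCount);
    }
    // ... run the fused forward kernel here, writing per-step gate
    // activations and cell states into reserve_space_ ...
    (void)x;
    (void)y;
  }

  void Backward(const std::vector<float>& dy, std::vector<float>* dx) {
    // Read the activations saved by Forward() instead of recomputing them;
    // this is where most of the backward-pass speedup comes from.
    // ... run the fused backward kernel here, reading reserve_space_ ...
    (void)dy;
    (void)dx;
  }

  // reserve_space_ is released automatically when the cached operator
  // instance is destructed.

 private:
  static constexpr size_t kGateCount = 4;  // e.g. i/f/g/o gates of an LSTM
  size_t seq_len_;
  size_t batch_;
  size_t hidden_;
  std::vector<float> reserve_space_;  // reserved workspace, private member
};
```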

Performance

LSTM

To demonstrate the performance of an LSTM layer with our fused RNN operator, we use the layer sizes and parameters from the Deep Speech 2 (DS2) model:

seq_length = 300, batch_size = 20, input_size = 800, hidden_size = 800, single direction.

...


A single-layer, single-direction test gives the results below:

T, N, I, H = 300, 20, 800, 800; layer = 1; bidirection = False

samples/sec | SKX-8180 LSTMCell | SKX-8180 sym.RNN | sym.RNN/LSTMCell (8180) | P100 LSTMCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/LSTMCell(P100)
LSTM-Inference | 187.09 | 394.73 | 210.98% | 399.76 | 1049.81 | 37.60% | 98.88%
LSTM-Training(fwd+bwd) | 73.23 | 153.53 | 209.65% | 118.65 | 339.80 | 45.18% | 129.4%

For a 5-layer LSTM, we get the following performance and speedup on SKX-8180 (2 sockets) and P100:


samples/sec | SKX-8180 LSTMCell | SKX-8180 sym.RNN | sym.RNN/LSTMCell (8180) | P100 LSTMCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/LSTMCell(P100)
LSTM-Inference | 37.24 | 107.13 | 287.64% | 86.45 | 329.78 | 32.48% | 123.92%
LSTM-Training(fwd+bwd) | 12.93 | 32.29 | 249.73% | 25.45 | 124.13 | 26.01% | 126.85%

GRU 

As with the LSTM benchmark, the sizes and parameters for the GRU layer are also taken from the DS2 model:

seq_length = 300, batch_size = 20, input_size = 800, hidden_size = 800, single direction 

Single-layer performance on SKX-8180 (2 sockets) and P100:

 



samples/sec (samples = 20) | SKX-8180 GRUCell | SKX-8180 sym.RNN | sym.RNN/GRUCell (8180) | P100 GRUCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/GRUCell(P100)
Inference | 128.21 | 392.16 | 306% | 180.18 | 952.38 | 41% | 218%
Training(fwd+bwd) | 80.15 | 171.40 | 216% | 126.26 | 338.98 | 51% | 137%

For a 5-layer GRU, performance on SKX-8180 (2 sockets) and P100:

samples/sec (samples = 20) | SKX-8180 GRUCell | SKX-8180 sym.RNN | sym.RNN/GRUCell (8180) | P100 GRUCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/GRUCell(P100)
Inference | 26.67 | 88.9 | 333% | 40.57 | 357.14 | 25% | 219%
Training(fwd+bwd) | 15.22 | 39.34 | 261% | 27.62 | 140.85 | 28% | 142%

vRNN

Vanilla RNN results use the same DS2 sizes and parameters and were measured on SKX-8180 with 2 sockets, comparing the fused sym.RNN against the non-fused RNN cell implementation.

Single layer:

samples/sec | Non-FusedRNN | FusedRNN | FusedRNN/Non-FusedRNN
vRNN(Relu)-Inference | 518.13 | 1538.46 | 296.92%
vRNN(Relu)-Training(fwd+bwd) | 202.02 | 357.14 | 176.79%
vRNN(Tanh)-Inference | 492.61 | 952.38 | 193.33%

5 layers:

samples/sec | Non-FusedRNN | FusedRNN | FusedRNN/Non-FusedRNN
vRNN(Relu)-Inference | 40.73 | 134.23 | 329.53%
vRNN(Relu)-Training(fwd+bwd) | 22.60 | 35.97 | 159.17%
vRNN(Tanh)-Inference | 38.91 | 104.17 | 267.71%

Upstream

  • LSTM (PR#10104), GRU (PR#10311) and vRNN (PR#11399): Merged
  • PR#10104: This PR is for the fused LSTM operator, which also supports multi-layer and bidirectional computation. The code is complete and ready for review. When we tried to refactor the code, including the CuDNN implementation, with NNVM interfaces, a segfault was observed in the MXNet CI environment. The error cannot be reproduced on our local server, but it seems to be caused by the memory sharing mechanism between the forward and backward computation. So we removed the NNVM interfaces from this PR and kept both the CPU path and the GPU path with the legacy registration method.
  • PR#10311: This PR is for the fused GRU operator. Multi-layer and bidirectional support is also implemented for fused GRU. This PR's review and merging depend on the progress of PR#10104.
  • TODOs: Vanilla RNN support is still WIP.

MKL-DNN Integration

Intel MKL-DNN is an open-source performance library for deep learning applications that accelerates deep learning frameworks on Intel architecture. Recently, MKL-DNN added RNN primitives to its master branch on GitHub. These primitives are still experimental and do not yet deliver sufficient performance. The MKL-DNN team is collecting user feedback and continues to improve them. Currently, vanilla RNN, LSTM and GRU, including their bidirectional and multi-layer computation, are supported by MKL-DNN.

...