
There are several variant RNN layers for solving different problems in the NLP and Seq2Seq learning fields. Vanilla RNN, GRU and LSTM are the three most popular variants. A given RNN cell can also be extended to bidirectional and multi-layer scenarios. We have implemented a unified user interface and architecture for the fused RNN operator, which can be easily extended to other RNN variants. GRU and LSTM are already supported under this design for both unidirectional and bidirectional computation. Multi-layer GRU and LSTM are also provided for users who want to build deep RNN models.

 

Operator Design

 Operator Execution Flow

The graph below demonstrates the design and execution flow of the fused RNN operator in MXNet. Green blocks have already been implemented for NVIDIA GPUs through the cuDNN interfaces. Yellow blocks were recently integrated by PR#9977, but only for the LSTM inference path. Blue blocks will be added to extend PR#9977 to training and to other RNN variants. Currently, PR#10104 has been submitted for the fused LSTM implementation and PR#10311 for the fused GRU implementation. Vanilla RNN support is planned and will be provided in the future.

(Figure: design and execution flow of the fused RNN operator in MXNet)

Operator Registration

Currently, `sym.RNN` is registered in MXNet with the legacy DMLC interfaces. We are trying to refactor this part of the code with NNVM interfaces. In the NNVM registration design, operator creation, caching and workspace sharing need to be redesigned so that this information can be passed between the forward and backward paths and across iterations.

(1) Operator caching

A static thread-local hash map is defined and cached in each operator computing thread. The key of the hash map is generated from the RNNOp parameters and input shapes, and the value is a shared pointer to an RNNOp instance. With this mechanism, we remove the overhead of creating operator instances and share them across all iterations.
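The sketch below illustrates this caching scheme in plain C++. It is a simplified illustration only: `RNNOp`, `MakeKey` and the parameter fields used to build the key are placeholders, not the exact code from the PRs.

```cpp
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

// Placeholder for the real RNNOp class (simplified for illustration).
struct RNNOp { /* forward/backward state lives here */ };

// Build a key from the operator parameters and input shapes so that
// structurally identical RNN layers map to the same cached instance.
static std::string MakeKey(int mode, int num_layers, int state_size,
                           bool bidirectional,
                           const std::vector<int>& data_shape) {
  std::ostringstream oss;
  oss << mode << '/' << num_layers << '/' << state_size << '/' << bidirectional;
  for (int dim : data_shape) oss << '/' << dim;
  return oss.str();
}

// One cache per computing thread: no locking is needed, and the operator
// instance (with its workspace) is reused across iterations on that thread.
static std::shared_ptr<RNNOp> GetCachedOp(const std::string& key) {
  static thread_local std::unordered_map<std::string, std::shared_ptr<RNNOp>> cache;
  auto it = cache.find(key);
  if (it == cache.end()) {
    it = cache.emplace(key, std::make_shared<RNNOp>()).first;
  }
  return it->second;
}
```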

(2) Workspace sharing

As described above, reusing the intermediate results of the forward pass during backward computation reduces the amount of computation and significantly improves backward performance. A reserved workspace buffer is defined as a private member of the RNNOp class. This buffer stores the intermediate results during forward computation and is reused during backward computation. If the operator instance is cached and reused in other iterations, this workspace buffer is also reused. The workspace buffer is released when the operator instance is destructed.
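Below is a simplified, self-contained sketch of how such a reserved workspace can be held as a private member and reused between the forward and backward passes. The member names, sizing formula and method signatures are illustrative assumptions, not the actual RNNOp implementation.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch only: the real RNNOp stores the intermediate results
// produced in Forward() so that Backward() does not recompute them.
class RNNOp {
 public:
  void Forward(const float* /*data*/, std::size_t batch, std::size_t seq_len) {
    // Allocate (or grow) the reserved buffer once; later iterations reuse it.
    const std::size_t needed = ReserveSpaceSize(batch, seq_len);
    if (reserve_space_.size() < needed) reserve_space_.resize(needed);
    // ... run the fused forward pass, writing intermediate results
    // (gate outputs, hidden states) into reserve_space_ ...
  }

  void Backward(const float* /*grad_out*/) {
    // Read the intermediate results saved by Forward() instead of
    // recomputing them, which is where the backward speedup comes from.
    // ... use reserve_space_ here ...
  }

 private:
  std::size_t ReserveSpaceSize(std::size_t batch, std::size_t seq_len) const {
    // Placeholder sizing formula; the real size depends on mode, layers,
    // directions and hidden size.
    return batch * seq_len * 4;
  }

  // Released automatically when the (cached) operator instance is destroyed.
  std::vector<float> reserve_space_;
};
```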

Performance

LSTM

To demonstrate the performance of the LSTM layer with our fused RNN operator, we use the sizes and parameters from the Deep Speech 2 (DS2) model:

seq_length = 300, batch_size = 20, input_size = 800, hidden_size = 800, single direction.

For a single-layer LSTM, we get the following performance and speedup on a 2-socket SKX8180:

samples/sec | SKX8180 LSTMCell | SKX8180 sym.RNN | sym.RNN/LSTMCell (SKX8180) | P100 LSTMCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/LSTMCell(P100)
Inference | 187.09 | 394.73 | 210.99% | 399.76 | 1049.81 | 37.60% | 98.88%
Training (fwd+bwd) | 73.23 | 153.53 | 209.65% | 118.65 | 339.80 | 45.18% | 129.4%

For a 5-layer LSTM, we get the following performance and speedup on a 2-socket SKX8180:

samples/sec | SKX8180 LSTMCell | SKX8180 sym.RNN | sym.RNN/LSTMCell (SKX8180) | P100 LSTMCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/LSTMCell(P100)
Inference | 37.24 | 107.13 | 287.64% | 86.45 | 329.78 | 32.48% | 123.92%
Training (fwd+bwd) | 12.93 | 32.29 | 249.66% | 25.45 | 124.13 | 26.01% | 126.85%

GRU 

As with the LSTM benchmark, the sizes and parameters for the GRU layer are also taken from the DS2 model:

seq_length = 300, batch_size = 20, input_size = 800, hidden_size = 800, single direction 

Single-layer GRU performance on a 2-socket SKX8180:

 

samples/sec | SKX8180 GRUCell | SKX8180 sym.RNN | sym.RNN/GRUCell (SKX8180) | P100 GRUCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/GRUCell(P100)
Inference | 128.21 | 392.16 | 306% | 180.18 | 952.38 | 41% | 218%
Training (fwd+bwd) | 80.32 | 171.91 | 216% | 126.58 | 338.98 | 51% | 137%

For a 5-layer GRU, performance on a 2-socket SKX8180:

samples/sec | SKX8180 GRUCell | SKX8180 sym.RNN | sym.RNN/GRUCell (SKX8180) | P100 GRUCell | P100 sym.RNN | sym.RNN(8180)/sym.RNN(P100) | sym.RNN(8180)/GRUCell(P100)
Inference | 26.67 | 88.9 | 333% | 40.57 | 357.14 | 25% | 219%
Training (fwd+bwd) | 15.04 | 39.2 | 261% | 27.62 | 140.85 | 28% | 142%

Upstream

  • PR#10104: This PR is for the fused LSTM operator, which also supports multi-layer and bidirectional computation. The code is complete and ready for review. When we tried to refactor this code, including the cuDNN implementation, with NNVM interfaces, a segfault was observed in the MXNet CI environment. The error cannot be reproduced on our local server, but it seems to be caused by the memory sharing mechanism between the forward and backward computation. So we removed the NNVM interfaces from this PR and kept both the CPU and GPU paths with the legacy registration method.
  • PR#10311: This PR is for the fused GRU operator, which also supports multi-layer and bidirectional computation. Its review and merging depend on the progress of #10104.
  • TODOs: Vanilla RNN support is still WIP.

MKL-DNN Integration

Intel MKL-DNN is an open-source performance library for deep learning applications. The library accelerates deep learning applications and frameworks on Intel architecture. Recently, MKL-DNN added RNN primitives to its master branch on GitHub. These primitives are still experimental and do not yet have good enough performance. The MKL-DNN team is collecting user feedback and continues to improve them. Currently, vanilla RNN, LSTM and GRU, as well as their bidirectional and multi-layer variants, are supported by MKL-DNN.

We will integrate the MKL-DNN RNN primitives into MXNet once they become mature and their application programming interfaces are settled. After the MKL-DNN RNN primitives are integrated into MXNet, the fused RNN operators for CPU described in the sections above will remain in MXNet as a reference CPU implementation. If MKL-DNN is not enabled by the user when compiling MXNet, RNN layers in the user's model can still run on our fused RNN operator for good performance.

We will keep working on functionality parity and consistency among our fused RNN operator, the MKL-DNN primitives and the cuDNN implementation.

 
