...

  • PR #10104: Merged. This PR adds the fused LSTM operator, which supports multi-layer and bidirectional computation as well (see the usage sketch after this list).
  • PR #10311: Merged. This PR adds the fused GRU operator; multi-layer and bidirectional support is implemented for it as well. Its review and merging depended on the progress of #10104.
  • TODOs: Vanilla RNN support is still WIP.
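A minimal usage sketch in Python, assuming the fused operators are reached through the standard Gluon RNN blocks; the layer sizes and input shapes below are illustrative only:

import mxnet as mx
from mxnet import nd
from mxnet.gluon import rnn

# A 2-layer bidirectional LSTM; on CPU this is expected to dispatch to the
# fused LSTM operator from PR #10104 (assumption for illustration).
layer = rnn.LSTM(hidden_size=256, num_layers=2, bidirectional=True)
layer.initialize()

# Default layout is 'TNC': (sequence_length, batch_size, input_size).
x = nd.random.uniform(shape=(10, 32, 128))
out = layer(x)
print(out.shape)  # (10, 32, 512): last dim is 2 * hidden_size because of bidirection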

MKL-DNN Integration

Intel MKL-DNN is an open-source performance library for deep learning applications. The library accelerates deep learning applications and frameworks on Intel architecture. Recently, MKL-DNN added RNN primitives to its master branch on GitHub. These primitives are still experimental and their performance is not yet good enough. The MKL-DNN team is collecting user feedback and continues to improve the performance of these primitives. Currently, vanilla RNN, LSTM and GRU, as well as their bidirectional and multi-layer variants, are supported by MKL-DNN.
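For reference, a hedged sketch of how the cell types listed above are selected through MXNet's fused symbolic RNN operator; the variable names and sizes are illustrative and this is not the integration code itself:

import mxnet as mx

# Illustration: the fused sym.RNN operator exposes the same cell types that the
# MKL-DNN RNN primitives cover (vanilla RNN, GRU, LSTM), plus multi-layer and
# bidirectional configurations.
data = mx.sym.Variable('data')          # (seq_len, batch, input_size)
params = mx.sym.Variable('rnn_params')  # flattened weights and biases
state = mx.sym.Variable('state')        # (num_layers * num_directions, batch, state_size)

gru = mx.sym.RNN(data=data, parameters=params, state=state,
                 state_size=256, num_layers=2, bidirectional=True,
                 mode='gru', name='fused_gru')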

...