...

You can then run the LeNet-5 unit test, which trains LeNet-5 on MNIST using the symbolic API. The test then runs inference in MXNet both with and without the MXNet-TensorRT runtime integration. Finally, the test displays a comparison of both runtimes' accuracy scores. The test can be run as follows:

Code Block
python ${MXNET_HOME}/tests/python/tensorrt/test_tensorrt_lenet5.py
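The final step of the test compares the accuracy scores from the two runs. A minimal sketch of what that comparison might look like is below; the function name and tolerance are illustrative, not the actual test code.

```python
# Hedged sketch of the test's final accuracy comparison.
# `compare_accuracies` and the 0.01 tolerance are illustrative choices,
# not taken from the actual unit test.

def compare_accuracies(mxnet_acc, trt_acc, tolerance=0.01):
    """Return True if the TensorRT-accelerated accuracy is within
    `tolerance` of the plain MXNet accuracy."""
    return abs(mxnet_acc - trt_acc) <= tolerance

# Example with made-up accuracy scores:
print(compare_accuracies(0.9902, 0.9900))  # True: the runtimes agree closely
print(compare_accuracies(0.9902, 0.9000))  # False: too large a gap
```

A small tolerance is used rather than exact equality because TensorRT may fuse operations and use different numerical kernels, so bitwise-identical results are not expected.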

...


You should get a result similar to the following:

...

The script can be run as follows:

Code Block
python ${MXNET_HOME}/tests/python/tensorrt/test_tensorrt_resnet_resnext.py

...


Here's some sample output for inference with batch size 16 (TensorRT is especially useful at small batch sizes, for low-latency production inference):

...
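To reproduce a latency comparison like the one above for your own models, you can time a forward pass over many batches and report the average per-batch latency. The sketch below uses a placeholder callable in place of a real MXNet or MXNet-TensorRT forward pass; the helper name is illustrative.

```python
import time

def time_inference(run_batch, n_batches=100):
    """Time a callable that runs one batch of inference and return the
    average milliseconds per batch. `run_batch` stands in for a real
    MXNet or MXNet-TensorRT forward pass (an assumption for this sketch)."""
    start = time.perf_counter()
    for _ in range(n_batches):
        run_batch()
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / n_batches

# Usage: time both runtimes with the same callable signature, then compare.
baseline_ms = time_inference(lambda: None)  # replace with the MXNet forward pass
```

Running both runtimes through the same timing loop keeps the comparison fair: any fixed overhead (data loading, Python dispatch) affects both measurements equally.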


Simply switching the flag lets us go back and forth between MXNet and MXNet-TensorRT inference. See the details in the unit test at `${MXNET_HOME}/tests/python/tensorrt/test_tensorrt_lenet5.py`.
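As a sketch of that toggle, early MXNet-TensorRT builds read an `MXNET_USE_TENSORRT` environment variable; treat the variable name as an assumption and verify it against the unit test for your MXNet version.

```python
import os

# Hedged sketch: toggling the TensorRT integration via an environment
# variable. MXNET_USE_TENSORRT is the flag used by early MXNet-TensorRT
# builds; confirm the name against the unit test before relying on it.

def set_tensorrt_enabled(enabled):
    os.environ['MXNET_USE_TENSORRT'] = '1' if enabled else '0'

set_tensorrt_enabled(True)   # subsequent binds use the TensorRT runtime
set_tensorrt_enabled(False)  # back to plain MXNet inference
```

Because the flag is read at graph-bind time, it must be set before the model is bound, not after; rebinding is needed to switch runtimes mid-process.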

Running TensorRT with your own models with the Gluon API

...