...

In this case, you would like to run ci/build.py --build --platform ubuntu_build_cuda /work/runtime_functions.sh build_ubuntu_gpu_cuda8_cudnn5, which would produce output like the following image:

...
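The last argument passed to ci/build.py names a bash function from runtime_functions.sh, which the commands on this page reference at /work/runtime_functions.sh inside the container. If you are unsure which build and test functions are available, one way to list them is to grep the script in the source tree. This is only a sketch; the path ci/docker/runtime_functions.sh is an assumption about the layout of your checkout:

# list the functions defined in the runtime functions script (path may differ per checkout)
grep -E '^[a-z0-9_]+\(\)' ci/docker/runtime_functions.sh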

'GPU: MKLDNN': {
  node('mxnetlinux-cpu') {
    ws('workspace/build-mkldnn-gpu') {
      init_git()
      sh "ci/build.py --build --platform ubuntu_build_cuda /work/runtime_functions.sh build_ubuntu_gpu_mkldnn"
      pack_lib('mkldnn_gpu', mx_mkldnn_lib)
    }
  }
},
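To reproduce this stage locally, the same approach applies: take the command from the sh step and execute it from the root of your MXNet workspace:

ci/build.py --build --platform ubuntu_build_cuda /work/runtime_functions.sh build_ubuntu_gpu_mkldnn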

...

After the binaries have been generated successfully, take the failed command from the screenshot above and execute it in the root of your MXNet workspace. In this case, you would like to run ci/build.py --nvidiadocker --build --platform ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_python2_gpu. Please note the --nvidiadocker parameter in this example: it indicates that this test requires a GPU and is thus only executable on an Ubuntu machine with Nvidia-Docker installed and a GPU available. The result of this execution should look like the following:

...
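If this command fails immediately, it is worth verifying that Nvidia-Docker itself can see your GPU before digging into the test. A minimal sanity check, assuming the public nvidia/cuda image is available, could look like this (not part of the MXNet tooling):

# should print the nvidia-smi table with your GPUs listed
nvidia-docker run --rm nvidia/cuda nvidia-smi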

ci/build.py --nvidiadocker --build --platform ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_python2_gpu

...

ci/build.py --nvidiadocker --build --platform ubuntu_gpu --into-container

This is effectively done by removing everything related to the runtime function and replacing it with the --into-container flag.
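As a sketch of how this can be used for debugging (the interactive behaviour of --into-container may vary between MXNet versions): the flag drops you into a shell inside the same container, where the runtime functions script referenced in the commands above can be invoked manually, for example:

ci/build.py --nvidiadocker --build --platform ubuntu_gpu --into-container
# inside the container shell (assuming the script is mounted at /work/runtime_functions.sh):
/work/runtime_functions.sh unittest_ubuntu_python2_gpu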