microsoft / onnxruntime-extensions

onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime
MIT License

Add Linux GPU Stage #610

Closed mszhanyi closed 10 months ago

mszhanyi commented 10 months ago

Description:

  1. Run build and native tests in container.
  2. Extensions is built with CUDA 11.8 (sm=86) and ONNX Runtime 1.16.2.
  3. The pipeline runs in an A10 GPU pool.
  4. Why use Ubuntu as the base image rather than Red Hat UBI? Because I found that only GCC 8.5 or GCC >= 12 can be installed in a Red Hat UBI container: std::filesystem isn't supported in GCC < 9, and GCC 12 is not supported by CUDA 11.8 (https://gist.github.com/ax3l/9489132#nvcc).

How to try it locally

  1. Build the Docker image:

     ```shell
     python3 tools/utils/get_docker_image.py \
       --dockerfile tools/ci_build/github/linux/docker/Dockerfile.ubuntu_cuda11_8_tensorrt8_6 \
       --context tools/ci_build/github/linux/docker/ \
       --docker-build-args "--build-arg BUILD_UID=$( id -u )" \
       --repository onnxruntime-extensionscuda11build
     ```

  2. Run the build and native tests in the container:

     ```shell
     docker run --gpus all --rm \
       --volume /home/azureuser/onnxruntime-extensions:/onnxruntime-extensions \
       --volume /home/azureuser/onnxruntime-linux-x64-gpu-1.16.2:/onnxruntime \
       -e CUDA_PATH=/usr/local/cuda-11.8 \
       onnxruntime-extensionscuda11build \
       /bin/bash -c " \
         set -ex; \
         pushd /onnxruntime-extensions; \
         sh ./build.sh -DOCOS_ENABLE_CTEST=ON -DOCOS_USE_CUDA=ON \
           -DCMAKE_CUDA_ARCHITECTURES=86 \
           -DOCOS_ONNXRUNTIME_VERSION=\"$(ORT_VERSION)\" \
           -DONNXRUNTIME_PKG_DIR=/onnxruntime; \
         cd out/Linux/RelWithDebInfo; \
         ctest -C RelWithDebInfo --output-on-failure; \
         popd"
     ```