The main points of the proposed improvements are as follows:
- Add build instructions for both the CPU version and the CUDA+TensorRT version of the Docker container to the README
- Build the container as a general (non-root) user rather than as a privileged user
- Include the TensorRT Execution Provider in onnxruntime-gpu (this makes it possible to run TensorRT through ONNX)
- Pin the minimal set of required packages to specific versions
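To make the non-root build and the version pinning concrete, here is a minimal Dockerfile sketch of what the CUDA+TensorRT variant might look like; the base image tag, user name, and build arguments are illustrative assumptions, not the actual files in this PR:

```dockerfile
# Sketch only: BASE_IMAGE, USERNAME, and UID/GID defaults are assumptions.
ARG BASE_IMAGE=nvcr.io/nvidia/tensorrt:23.12-py3
FROM ${BASE_IMAGE}

# Create a general (non-root) user so the image is not built or run as root.
ARG USERNAME=user
ARG UID=1000
ARG GID=1000
RUN groupadd --gid ${GID} ${USERNAME} \
 && useradd --uid ${UID} --gid ${GID} --create-home ${USERNAME}
USER ${USERNAME}

# Pin the minimal set of required packages to the versions listed below.
RUN pip install --user --no-cache-dir \
    torch==2.1.0 \
    onnx==1.16.1 \
    onnxruntime-gpu==1.18.0 \
    pycuda==2022.2.2
```

Passing the host UID/GID as build arguments (e.g. `--build-arg UID=$(id -u)`) keeps file ownership consistent when mounting the workspace into the container.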
Environment
- CUDA 12.3
- TensorRT 8.6.3.1-1+cuda12.0
- PyTorch v2.1.0
- ONNX 1.16.1
- onnxruntime v1.18.0
- pycuda 2022.2.2
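A quick way to confirm that a built container matches this environment is to compare installed package versions against the pins. This small Python sketch (the package names are assumptions about which parts of the environment are pip-installed; CUDA and the TensorRT system libraries come from the base image) reports mismatches without raising:

```python
from importlib import metadata

# Pip-installable pins from the environment list above.
PINNED = {
    "torch": "2.1.0",           # PyTorch v2.1.0
    "onnx": "1.16.1",
    "onnxruntime-gpu": "1.18.0",
    "pycuda": "2022.2.2",
}

def check_pins(pins):
    """Map each package to (pinned, installed); installed is None when
    the package is absent. Never raises."""
    report = {}
    for pkg, want in pins.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None
        report[pkg] = (want, have)
    return report

if __name__ == "__main__":
    for pkg, (want, have) in check_pins(PINNED).items():
        status = "OK" if have == want else "MISMATCH"
        print(f"{pkg}: pinned={want} installed={have} -> {status}")
```

Running this inside the container prints one line per pinned package, so a broken pin shows up immediately instead of failing later at import time.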
Type of change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
Checklist:
- [x] The code follows the Python style guide.
- [x] Code and files are well organized.
- [x] All tests pass.
- [x] New code is covered by tests.
- [ ] [Optional] We would be very happy if gitmoji :technologist: were used in the commit message :speech_balloon:!
Additional Information
Originally, I wanted to push the successfully built Docker images to Docker Hub so that every user could simply docker pull them instead of running docker build themselves, but since I am outside your organization, I have given up on that.