Closed: ms1design closed this pull request 3 months ago
Thanks @ms1design, I did not realize you could combine multiple build ARG declarations into one layer like you can with ENV; that should help reduce the layer count! I made a note to try these changes after GTC 👍 I like how you changed the URLs to the NVIDIA.com ones.
Do you see a reason not to have ENV CUDAARCHS=${CUDA_ARCH_LIST} \ CUDA_ARCHITECTURES=${CUDA_ARCH_LIST} included? That sets the CMake/nvcc GPU architectures (e.g. sm87) in all the downstream containers.
Do you see a reason not to have ENV CUDAARCHS=${CUDA_ARCH_LIST} \ CUDA_ARCHITECTURES=${CUDA_ARCH_LIST} included? That sets the cmake nvcc GPU architectures (i.e. sm87) in all the downstream containers
Good catch @dusty-nv - the missing CUDAARCHS / CUDA_ARCHITECTURES environment variables are restored.
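A minimal sketch of what the restored lines might look like in the Dockerfile. The ARG name CUDA_ARCH_LIST comes from the thread; the default value "87" (Jetson Orin, sm_87) is an illustrative assumption:

```dockerfile
# Illustrative default; the real build passes CUDA_ARCH_LIST per platform.
ARG CUDA_ARCH_LIST="87"

# CMake 3.20+ reads the CUDAARCHS environment variable as the default for
# CMAKE_CUDA_ARCHITECTURES, so every downstream container's cmake/nvcc build
# compiles for the intended GPUs without extra flags.
ENV CUDAARCHS=${CUDA_ARCH_LIST} \
    CUDA_ARCHITECTURES=${CUDA_ARCH_LIST}
```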
Thanks @ms1design, did not realize you could combine multiple build ARG into one layer like you can ENV, that should help reduce the layer count!
Yes, you can indeed merge multiple ARG declarations into one instruction by using backslashes (\) to continue it across multiple lines. Basically, just don't try that with COPY. Is there anything else we can merge into one Docker command...
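A short sketch of the pattern being discussed. The specific ARG names and values below are illustrative, not the PR's actual ones:

```dockerfile
# Instead of one layer-producing instruction per build arg:
#   ARG CUDA_VERSION=12.2
#   ARG CUDA_ARCH_LIST=87
#   ARG DISTRO=ubuntu2204
# ...declare them all in a single ARG instruction, with backslashes
# continuing it across lines, the same way a multi-variable ENV works:
ARG CUDA_VERSION=12.2 \
    CUDA_ARCH_LIST=87 \
    DISTRO=ubuntu2204
```

This keeps the Dockerfile readable while emitting a single instruction; COPY, by contrast, takes sources and a destination rather than a list of assignments, so the same trick does not apply to it.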
I like how you changed the URL's to the NVIDIA.com ones.
Thanks @dusty-nv! I like it too - now it's more robust!
Hi @dusty-nv! 👋

It's a cherry-picked PR from #414 that introduces improvements only for the cuda container: the DISTRO build-time environment variable now has a properly pinned version based on the L4T version.
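One way the L4T-to-distro pin could work, as a hedged sketch: the function name and structure are hypothetical, but the release mapping (L4T r32 ships on Ubuntu 18.04, r35 on 20.04, r36 on 22.04) matches NVIDIA's JetPack releases:

```shell
# Hypothetical helper: derive an Ubuntu codename pin from an L4T release.
l4t_to_distro() {
  case "$1" in
    32.*) echo "bionic" ;;  # JetPack 4.x -> Ubuntu 18.04
    35.*) echo "focal"  ;;  # JetPack 5.x -> Ubuntu 20.04
    36.*) echo "jammy"  ;;  # JetPack 6.x -> Ubuntu 22.04
    *)    echo "unknown"; return 1 ;;
  esac
}
```

A build script could then pass the result through as a build arg, e.g. `docker build --build-arg DISTRO="$(l4t_to_distro 36.2)" ...`, so the pin always tracks the detected L4T version.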