Sets up RoboBuggy containers with GPU access, and installs PyTorch as part of the build process.
Requirements
Prod environments must have CUDA 12.0 (other versions are untested) and the NVIDIA Container Toolkit (already installed on Short Circuit).
Dev environments have no requirements. When running `setup_dev` rather than `setup_prod`, the GPU will not be accessible, even if one is present.
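Before running `setup_prod`, the prod prerequisites can be sanity-checked from the host. A minimal sketch (the CUDA image tag is an assumption; any CUDA 12.0 base image works):

```shell
# Check that the NVIDIA driver is up and reports a CUDA 12.x version
nvidia-smi

# Verify the NVIDIA Container Toolkit can hand the GPU to a container
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```

If the second command prints the same GPU table as the first, container GPU access is working.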
Potential Problems
Right now, it is assumed that prod = GPU and dev = no GPU. This may be a problem on NAND if it does not have a GPU. This could be resolved by creating a new setup script that uses `docker-compose-no-gpu.yml`, or by adding a flag to `setup_prod`.
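The flag approach could look something like the sketch below, where `setup_prod` picks the compose file from an argument. The function name is hypothetical; only `docker-compose-no-gpu.yml` comes from the text above, and the default GPU compose file name is an assumption:

```shell
# Hypothetical helper for setup_prod: choose the compose file from a --no-gpu flag.
select_compose_file() {
    if [ "$1" = "--no-gpu" ]; then
        # NAND (or any GPU-less prod host) would pass --no-gpu
        echo "docker-compose-no-gpu.yml"
    else
        # Default: the GPU-enabled compose file (name assumed)
        echo "docker-compose.yml"
    fi
}

# setup_prod would then run:
#   docker compose -f "$(select_compose_file "$1")" up --build -d
```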
GPU testing with `setup_dev` is currently disabled.
Potential Improvements
Turning the `setup_` scripts into a custom build script or `make` targets could help with the above, especially since most build steps are very similar, and GPU/no-GPU and dev/prod each make independent changes to the setup flow.
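As a sketch of the `make` approach, target-specific variables let the two independent axes (dev/prod, GPU/no-GPU) share one build rule. Target and file names here are assumptions, not the repo's actual layout:

```makefile
# Hypothetical Makefile replacing the setup_ scripts.
COMPOSE_GPU    = docker-compose.yml
COMPOSE_NO_GPU = docker-compose-no-gpu.yml

# Each target only selects a compose file; the build recipe is shared.
prod:        COMPOSE = $(COMPOSE_GPU)
prod-no-gpu: COMPOSE = $(COMPOSE_NO_GPU)
dev:         COMPOSE = $(COMPOSE_NO_GPU)

prod prod-no-gpu dev:
	docker compose -f $(COMPOSE) up --build -d
```

A `make prod-no-gpu` target would then cover the NAND case without a separate setup script.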