Describe what should be investigated or refactored
There are mentions of `GPU_LIMITS`, `GPU_REQUEST`, and `GPU_ENABLED` across the repository. These should be cleaned up and unified under a single variable across the backends. My recommendation is to follow the whisper pattern for all backends.
Ensure every backend exposes a modifiable `GPU_REQUEST` Zarf variable for the delivery engineer.
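As a rough sketch of what the unified pattern could look like, a backend's `zarf.yaml` might declare a single `GPU_REQUEST` variable that Zarf templates into the chart's resource requests. The names, paths, and defaults below are illustrative assumptions, not taken from the leapfrogai repo:

```yaml
# zarf.yaml (illustrative sketch; actual package/chart names in the repo may differ)
kind: ZarfPackageConfig
metadata:
  name: example-backend
variables:
  - name: GPU_REQUEST
    description: "Number of GPUs requested by the backend pod; set by the delivery engineer at deploy time"
    default: "0"

---
# values.yaml fragment consumed by the backend's chart; Zarf replaces the
# ###ZARF_VAR_...### placeholder with the value supplied at `zarf package deploy`
resources:
  limits:
    nvidia.com/gpu: "###ZARF_VAR_GPU_REQUEST###"
```

With this shape, `zarf package deploy ... --set GPU_REQUEST=1` would be the single knob for GPU scheduling, replacing the scattered `GPU_LIMITS`/`GPU_ENABLED` variants.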
Links to any relevant code
https://github.com/search?q=repo%3Adefenseunicorns%2Fleapfrogai+GPU_&type=code