tashrifbillah opened this issue 11 months ago
I'm not a developer or an expert with this program, but I'd say no. Those are just FSL programs; they run on the CPU and can use multiple threads.

AFAIK, nothing in the pipeline uses the GPU. The model is already built, so inference is just a forward pass through the model and runs on the CPU. In fact, I know that the torch library in the container version was not compiled for GPUs.
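One way to confirm this claim yourself, assuming the container exposes a Python interpreter with torch installed, is to ask torch directly whether its build includes CUDA support. This is a generic check, not something specific to this pipeline:

```python
# Check whether the installed torch build can use a GPU at all.
# If torch was compiled without CUDA support (as described for the
# container build above), these return False even on a host with a GPU.
import torch

print(torch.__version__)               # CPU-only wheels often carry a "+cpu" suffix
print(torch.backends.cuda.is_built())  # False if this torch build has no CUDA support
print(torch.cuda.is_available())       # False for a CPU-only build or when no GPU is visible
```

If `torch.backends.cuda.is_built()` is `False`, no amount of GPU hardware will change the behavior; the binary itself cannot use CUDA, which would explain why `nvidia-smi` shows no process.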
Shouldn't this step use the GPU? The `nvidia-smi` command does not show an associated GPU process while the step is running.