Closed — Erotemic closed this 8 months ago
I think it would be better to change the providers to `["CUDAExecutionProvider", "CPUExecutionProvider"]`, so that CUDA is tried first and the session falls back to CPU if it fails. This would also fix #1317.
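A sketch of the suggested change (the model path and function name here are hypothetical). onnxruntime tries the providers in order, so CUDA is preferred and CPU acts as the fallback when the CUDA provider cannot be used:

```python
# Preferred order: try CUDA first, fall back to CPU.
PROVIDERS = ["CUDAExecutionProvider", "CPUExecutionProvider"]


def create_session(model_path: str):
    # Imported lazily so PROVIDERS can be reused without onnxruntime installed.
    import onnxruntime

    return onnxruntime.InferenceSession(model_path, providers=PROVIDERS)
```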
I tried it, but it failed. Maybe we still need to check whether CUDA is available and then update the provider list accordingly.
Adding the CUDA provider seemed to work fine for me. What's the best way to check for CUDA within this repo's dependencies? Normally I would just use `torch.cuda.is_available()`.
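One way to avoid depending on torch is to ask onnxruntime itself which providers it was built with, via `onnxruntime.get_available_providers()`, and filter a preference list against that. A minimal sketch (the helper name is mine, not from the repo):

```python
def pick_providers(available):
    """Keep preferred providers that are actually available; always keep CPU."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    # CPUExecutionProvider is always present, but guard anyway.
    return chosen or ["CPUExecutionProvider"]


# In practice this would be fed from onnxruntime, e.g.:
#   providers = pick_providers(onnxruntime.get_available_providers())
#   session = onnxruntime.InferenceSession(model_path, providers=providers)
```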
Thanks for the report. Fixed via https://github.com/labelmeai/labelme/pull/1364
Fixes #1334
The latest version of `onnxruntime.InferenceSession` seems to require that `providers` be specified. This is a small patch that adds them. I'm not sure the values I specified are correct or optimal, but it allows SAM to work on my machine rather than crash.
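A minimal sketch of the kind of patch described, assuming the session is created somewhere in labelme's SAM code (the function name and model path here are hypothetical):

```python
def create_sam_session(model_path="sam_decoder.onnx"):
    import onnxruntime

    # Calling InferenceSession(model_path) without providers now raises in
    # newer onnxruntime releases; passing CPUExecutionProvider explicitly
    # works everywhere, at the cost of never using the GPU.
    return onnxruntime.InferenceSession(
        model_path, providers=["CPUExecutionProvider"]
    )
```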