Hey, first of all, thanks for this node. It improved my results noticeably.
On my system the YOLO model runs on the CPU by default, even though a CUDA backend is available (via ROCm). If possible, please add a `.to()` call after loading the model. This fixes the issue with no downsides as far as I can see.
In `Models.yolo`:

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cls._yolo = cls._yolo.to(device)
```
This might be a theoretical issue for other torch backends, but from what I can see YOLO defaults to the CPU for those too, so at least this shouldn't make things worse than before.
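If you want to cover more than just CUDA, a device-selection helper could look something like the sketch below. It's only an illustration: the helper name `pick_device` is mine, and the exact set of backends worth checking is an assumption (CUDA here also covers ROCm builds, which expose the CUDA API in torch).

```python
import torch


def pick_device() -> torch.device:
    """Pick the best available torch device (illustrative sketch).

    Preference order: CUDA (also covers ROCm builds, which present
    themselves through the CUDA API), then Apple's MPS backend,
    otherwise fall back to the CPU.
    """
    if torch.cuda.is_available():
        return torch.device("cuda")
    # MPS only exists on newer torch builds, hence the getattr guard.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```

The model load would then become `cls._yolo = cls._yolo.to(pick_device())`, which degrades gracefully to the current CPU behavior when no accelerator is present.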