jadh4v closed this 7 months ago
Additional fixes for visualizing the overlay will be done in a separate PR.
This is because the CUDA version on my system isn't compatible with the torch version that this requires. I can normally use older versions of PyTorch just fine, and `torch.cuda.is_available()` still returns `True` even with the environment set up here with torch 2.1.2+cu121.
I'm a bit confused about the CUDA requirements issue. Isn't the virtual env created by poetry supposed to install the correct version of torch+cuda? Are you running the server with the `poetry run` command?
> I'm a bit confused about the cuda requirements issue. Isn't the virtual env created by poetry supposed to install the correct version of torch+cuda? Are you running the server with the `poetry run` command?
It does install the correct version of torch that is meant to pair with the correct version of CUDA, but the actual version of CUDA installed on the system is a system-level thing; the virtual env can't touch that. If I understand correctly, upgrading CUDA is similar to upgrading the video drivers on my system.
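The mismatch described above can be surfaced with a small diagnostic. This is a hypothetical sketch (the helper name `cuda_report` is not part of this PR): it compares the CUDA runtime the installed torch wheel was built against with what the system's `nvidia-smi` reports, and degrades gracefully when either is absent.

```python
import shutil
import subprocess


def cuda_report() -> dict:
    """Collect the wheel-level and system-level CUDA versions side by side.

    Hypothetical diagnostic helper, assuming torch may or may not be
    importable in the active (poetry) environment.
    """
    info = {"torch_cuda": None, "driver_smi": None}
    try:
        import torch
        # CUDA runtime the wheel was built for, e.g. "12.1" for +cu121.
        info["torch_cuda"] = torch.version.cuda
    except ImportError:
        pass  # torch not installed in this environment
    if shutil.which("nvidia-smi"):
        # First lines of nvidia-smi output include the driver/CUDA banner.
        out = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
        info["driver_smi"] = out.stdout.splitlines()[0] if out.stdout else None
    return info
```

If the wheel's CUDA version is newer than what the system driver supports, that explains `torch.cuda.is_available()` succeeding while actual kernel launches fail.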
OK, understood. I guess we could also later try to catch the CUDA exception and fall back to CPU inference at runtime.
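A runtime fallback along those lines could look like the sketch below. The function name `pick_device` is illustrative, not part of the server: it probes CUDA with a tiny allocation (a driver/runtime mismatch can pass `is_available()` yet still raise on first use) and drops to CPU on any failure.

```python
def pick_device() -> str:
    """Return "cuda" when a usable GPU is present, otherwise "cpu".

    A minimal sketch, assuming the server keeps the device string and
    later passes it to model.to(device).
    """
    try:
        import torch
        if torch.cuda.is_available():
            # Probe with a tiny allocation: an incompatible CUDA runtime
            # may only raise at the first actual kernel launch.
            torch.zeros(1, device="cuda")
            return "cuda"
    except Exception:
        pass  # missing torch, missing driver, or incompatible CUDA runtime
    return "cpu"
```

Calling `pick_device()` once at server startup keeps the rest of the inference code device-agnostic.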
Add a new DL-model-based segmentation server, based on a customization of VolView's Python server.

This will close #36