Closed: antonyscerri closed this issue 2 years ago
Ok, so I spotted the example command to reduce the voxel dimensions for working with less GPU memory (I had been looking everywhere but the README). That seems to have worked; I've yet to review the output.
Now to work out what the best voxel dimensions are on a 16GB GPU.
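For anyone else tuning this, here is a rough back-of-the-envelope sketch (not taken from this repo; the channel count and dtype are assumptions) of how a dense feature volume scales with the voxel dimensions, which is why shrinking the grid frees up so much memory:

```python
import numpy as np

def voxel_grid_memory_gb(dims, channels=32, dtype_bytes=4):
    """Rough upper bound on the memory of one dense feature volume.

    dims:        (x, y, z) voxel counts of the reconstruction volume
    channels:    feature channels per voxel (model dependent; 32 is a guess)
    dtype_bytes: 4 for float32, 2 for float16
    """
    voxels = np.prod(dims)
    return voxels * channels * dtype_bytes / 1024**3

# Halving each dimension cuts the dense volume by 8x, which is why
# reducing the voxel dimensions is the quickest way to fit in 16 GB.
for dims in [(256, 256, 256), (192, 192, 192), (128, 128, 128)]:
    print(dims, f"~{voxel_grid_memory_gb(dims):.2f} GB per volume")
```

The actual model will use less than this dense estimate if it works with sparse volumes, but the cubic scaling is the same.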
Hi
The paper mentions the type of GPU used but not the amount of memory per device. I was wondering if you could provide some additional information, mainly for inference at this point. Trying to run this on a 16GB GPU with the sample dataset causes a "CUDA out of memory" error on the second step of the inference (inference2 in model.py); it seems to happen while calling the backbone3d.
I'm assuming, of course, that there isn't a library compatibility issue and that I have the right weights downloaded etc. for this to work correctly. It sounds like others have been able to run the inference, so I'm curious whether that was down to hardware or not.
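For reference, here is a minimal snippet I'd use to confirm what the device has and how much that step allocates (the `model.inference2(inputs)` call is only a placeholder for however model.py invokes it; just the torch.cuda calls are standard API):

```python
import torch

# Report the actual device memory, since the paper only names the GPU model.
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB total")

def run_inference2(model, inputs):
    # Hypothetical wrapper; `model` and `inputs` stand in for whatever
    # model.py builds, and the call below is not the repo's real signature.
    model.eval()
    with torch.no_grad():  # drop autograd buffers during inference
        out = model.inference2(inputs)
    print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.1f} GB, "
          f"reserved: {torch.cuda.memory_reserved() / 1024**3:.1f} GB")
    return out
```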
Thanks
Tony