Closed: poVoq closed this issue 10 months ago
For Nvidia, the card has to be supported by https://pytorch.org/, so I would expect only CUDA-enabled cards to work. ROCm should also work, but is untested.
An integrated GPU is not supported. This is a limitation of pytorch.
As for VRAM requirements, the model only takes ~2-3 GB when loaded, so I would expect even a 4 GB VRAM (CUDA-enabled) card to suffice, perhaps somewhat slowly, but your mileage may vary. A CPU would also work as a last resort, but would take orders of magnitude longer (minutes vs. seconds).
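To make the fallback behavior concrete, here is a minimal sketch of the device selection that pytorch-based tools typically do. This is illustrative only, not the project's actual code; the variable names are my own, and the VRAM readout assumes a single CUDA device at index 0.

```python
# Illustrative device selection for a pytorch-based tool (not the
# project's actual code). Falls back to CPU when CUDA is unavailable,
# which is the slow path described above.
try:
    import torch

    if torch.cuda.is_available():
        device = "cuda"
        # total_memory is reported in bytes; convert to GB for display.
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        print(f"Using CUDA GPU with {vram_gb:.1f} GB VRAM")
    else:
        device = "cpu"
        print("CUDA not available; falling back to CPU (much slower)")
except ImportError:
    # pytorch itself is missing; nothing GPU-related can work.
    device = "cpu"
    print("pytorch not installed; CPU only")
```

On an integrated-GPU laptop, `torch.cuda.is_available()` returns False and the CPU branch is taken, which is why inference takes minutes instead of seconds.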
Thanks... I guess if this will work with something other than object storage in the future, I'll give it a try on an Nvidia 970M with 3 GB VRAM.
I plan to add support for local storage, but I don't have an easy way to test it, as I only use object storage.
I think you should add to the README that it does not work on an integrated GPU. My guess is that most people run some kind of laptop with an integrated GPU, so they will install everything and then hit "UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling" like I did.
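A quick preflight check like the following would surface that warning's root cause before a full install. This is my suggestion, not something from the repo; it assumes python is on PATH, and the fallback message is my own wording for the case where pytorch isn't importable.

```shell
# Check whether pytorch can see a CUDA-capable GPU before installing
# everything. On an integrated-GPU laptop this prints False.
python -c "import torch; print(torch.cuda.is_available())" 2>/dev/null \
  || echo "pytorch not importable"
```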
Could you be a bit clearer about the GPU requirements of this?
Is CUDA needed, or would OpenCL suffice?
Can it work on ROCm with AMD GPUs?
Would an Intel integrated GPU maybe be sufficient?
How much VRAM would typically be required?
Thanks!