Abdull opened this issue 1 year ago
I got past that step by changing build.py:153 to simply say "cuda" — the only other option being "metal" for macOS. Perhaps you are mixing up the commands that assign the tvm target etc. in the Colab cells; on the terminal CLI there is no torch-dev-key variable defined.
EDIT: I am still unable to finish the build, though.
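Rather than hardcoding the string at build.py:153, the target choice could be exposed as a CLI flag. This is only a hypothetical sketch — the flag name and function are my own, not the repo's actual interface:

```python
import argparse

def parse_target(argv=None):
    """Sketch: make the tvm target a command-line option instead of an
    edit at build.py:153. Only "cuda" and "metal" are meaningful here,
    matching the two options discussed above."""
    parser = argparse.ArgumentParser(description="build target selection (illustrative)")
    parser.add_argument("--target", choices=["cuda", "metal"], default="cuda",
                        help="tvm target backend")
    return parser.parse_args(argv).target

# e.g. parse_target([]) -> "cuda"; parse_target(["--target", "metal"]) -> "metal"
```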
You should choose a GPU-supported wheel on that page; the default link points to a CPU-only version of the wheel.
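A quick way to check which kind of wheel you ended up with is to inspect `torch.version.cuda`, which is `None` on CPU-only builds. A minimal sketch (the helper name is mine, and it degrades gracefully when torch is not installed):

```python
import importlib.util

def torch_build_supports_cuda():
    """Return True if the installed torch wheel was built with CUDA support,
    False if it is a CPU-only build, or None if torch is not installed.
    Illustrative helper, not part of the repo."""
    if importlib.util.find_spec("torch") is None:
        return None  # torch not installed at all
    import torch
    # torch.version.cuda is a version string like "11.8" on CUDA builds,
    # and None on CPU-only wheels.
    return torch.version.cuda is not None
```

Note that this checks what the wheel was *built* with; a CUDA wheel on a machine without a GPU will still report `True` here even though `torch.cuda.is_available()` returns `False`.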
Using commit https://github.com/mlc-ai/web-stable-diffusion/tree/ce0c2fbd0fffd7ee39e7be9da34052a8809d98db
Environment: Ubuntu 22.04 LTS server without a graphics card.
Executing
causes the following error:
See https://github.com/mlc-ai/web-stable-diffusion/blob/ce0c2fbd0fffd7ee39e7be9da34052a8809d98db/web_stable_diffusion/utils.py#L14 .
Is it possible to enable a GPU backend in torch even if the build system environment does not provide that GPU backend?
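One workaround for machines without the GPU backend is to detect what torch actually has available and fall back to CPU instead of failing. This is a sketch of such a fallback, not the repo's actual utils.py logic — the function name and shape are assumptions:

```python
import importlib.util

def pick_device(preferred="cuda"):
    """Pick a torch device string, falling back to "cpu" when the preferred
    backend is unavailable. Illustrative only; the real utils.py detection
    linked above may differ."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # no torch installed: nothing but CPU makes sense
    import torch
    if preferred == "cuda" and torch.cuda.is_available():
        return "cuda"
    # "mps" is the Metal backend on macOS; guard the attribute for old torch.
    mps = getattr(torch.backends, "mps", None)
    if preferred == "mps" and mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```

On a headless Ubuntu server like the one described above, `pick_device("cuda")` would return `"cpu"`, letting the build proceed instead of erroring out — at the cost of generating artifacts for the CPU target.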