1080 Ti not detected while working with stable-diffusion and GPTchat
First I encountered "No module named 'encoded'".
Fixed by running: python -m pip install . (yes, include the .)
Second, I got the "No GPU being used. Careful, Inference might be extremely slow!" message.
To check whether PyTorch can see the GPU:
type python and hit enter,
then type import torch and hit enter,
then type torch.cuda.is_available() and see whether it prints True or False.
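The interactive check above can also be run as a small script. This is just a sketch: the function name cuda_status is my own, and it only reports status, it doesn't fix anything:

```python
def cuda_status():
    """Return a short description of whether PyTorch can see a CUDA GPU."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    if torch.cuda.is_available():
        # get_device_name(0) reports the first visible GPU, e.g. the 1080 Ti
        return "CUDA available: " + torch.cuda.get_device_name(0)
    return "CUDA NOT available (likely a CPU-only PyTorch build)"

print(cuda_status())
```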
If it prints False, go to the PyTorch website and follow the steps there to manually reinstall a CUDA-enabled build.
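For reference, the manual reinstall is usually an uninstall followed by an install from a CUDA-specific index. The exact command depends on your CUDA version, so copy it from the selector on pytorch.org; cu118 below is only an example:

```shell
# Remove the current (possibly CPU-only) build first
pip uninstall -y torch torchvision torchaudio
# Install a CUDA build; the index URL depends on your CUDA version --
# take the exact command from the pytorch.org selector (cu118 is an example)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```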
If it still doesn't work while everything else does (like in my case),
download Anaconda or Miniconda, create a clean environment, and start over.
That finally did the trick for me, though I still had to repeat the steps above inside the new environment.
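The clean-environment route, as a sketch (the environment name "sd" and the Python version are my own choices, not from the project docs):

```shell
# Create and activate a fresh environment (name "sd" is arbitrary)
conda create -n sd python=3.10 -y
conda activate sd
# ...then repeat the steps above inside it:
# python -m pip install .   plus the CUDA-enabled PyTorch reinstall
```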