Closed: ZeroCool22 closed this issue 2 years ago
You must select a target square before trying to generate.
Look at the second video: when I try to click the canvas I get an error too...
Are you sure you are using the latest commits? This line of code should be different in the latest commits.
Let me do a git pull and I will try again...
Ok:
C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion>python unstablefusion.py
'NoneType' object has no attribute 'width'
You specified use_auth_token=True, but a Hugging Face token was not found.
Do I need a token to run it locally?
Yes, you need to run a stable diffusion notebook locally once (so that your token is cached), for example this one: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb (download it and run it locally using Jupyter Notebook). After that you will be able to use UnstableFusion.
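By the way, if you want to check whether a token is already cached, a quick sketch like this should work (it only uses huggingface_hub, which the notebook installs anyway):

# Rough check for a cached Hugging Face token.
from huggingface_hub import HfFolder

token = HfFolder.get_token()  # returns None if nothing has been cached yet
print("token found" if token else "no cached token")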
I have never run a Jupyter notebook before. I just downloaded it, and now it has opened in Visual Studio Code:
Now do I run every cell?
Ok, and now?:
Click on the notebook and run the cells one by one. You probably won't need to run some of them, like these:
!pip install diffusers==0.3.0
!pip install transformers scipy ftfy
!pip install "ipywidgets>=7,<8"
which are just installing the packages that you already have.
Also you don't need to run the colab-specific ones like this:
from google.colab import output
output.enable_custom_widget_manager()
The main cell that you have to run is this:
from huggingface_hub import notebook_login
notebook_login()
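If running the whole notebook is inconvenient, you should also be able to cache the token from a plain Python session; this is just an alternative sketch and assumes a reasonably recent huggingface_hub:

# Alternative to notebook_login(): log in from a plain Python session.
# You will be prompted for your Hugging Face token, which then gets cached on disk.
from huggingface_hub import login

login()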
Still getting the "You specified use_auth_token=True, but a Hugging Face token was not found." error.
It may be a good idea to continue running the notebook until a stable diffusion pipeline is created.
Or you can edit the files and replace
use_auth_token=True
with
use_auth_token='<your huggingface token>'
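For reference, the edited line would end up looking roughly like this (the model id here is just an example, and the token value is a placeholder):

# Sketch of a pipeline creation call with an explicit token instead of use_auth_token=True.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token="<your huggingface token>",
).to("cuda")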
Well, after downloading a lot of things, this comes out...
In what file do I modify that?
diffusionserver.py
This one?
use_auth_token=True).to("cuda")
For this?:
use_auth_token='xxxxxxxxxxxx').to("cuda")
Yes, however, I have made a version that has a field in the UI for huggingface token. I will upload it in a few minutes. You may want to wait for that.
Added in aa5ffd5d34766df4197ecc376e701ed3cd688657.
Yeah!
I was about to ask if there wasn't an easier way to handle the token in the interface...
But what about the Torch error? :(
C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion>python unstablefusion.py
{'trained_betas'} was not found in config. Values will be initialized to default values.
Torch not compiled with CUDA enabled
Well, you need pytorch with cuda enabled for stablediffusion. See the pytorch website for how to install pytorch with cuda enabled.
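After installing, a quick way to check whether your PyTorch build actually has CUDA support is something like this:

# Quick diagnostic: torch version, the CUDA version it was built against
# (None for CPU-only builds), and whether a GPU is visible.
import torch

print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())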
install pytorch with cuda
Could this be included in the requirements.txt?
The thing is, you don't need this if you intend to run it with google colab.
Yeah, but I always prefer to run it locally, so I can use my own GPU...
What should I put here?
You could just clone the repository from scratch (this is conflicting with the changes you have made, for example your Hugging Face token; in the new version there is a UI field for that, so your change is no longer needed).
Can you help me with the PyTorch with CUDA install, please?
https://pytorch.org/get-started/locally/
Will this work?:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
Yes.
Well, I don't know what else to do...
Did you install CUDA itself?
I used this command
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
inside:
C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion
Your CUDA version is too recent! You installed PyTorch with CUDA 11.3 but you have CUDA 11.7. (I think PyTorch only supports up to CUDA 11.6.)
How do I uninstall it?
Just from the Control Panel, or does it have to be from the console?
I don't think you need to uninstall anything. Just install the other version. (not sure though)
I tried 3 different versions, but when I run the command nvidia-smi it still says 11.7.
I used the command
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
But it still says 11.7.
You probably need to add the correct CUDA version's bin directory to PATH and remove the old one from PATH.
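If you are not sure which entries are being picked up, a small sketch like this lists the CUDA-looking directories currently on PATH (adjust the substring if your toolkit lives somewhere non-standard):

# Lists PATH entries that look like CUDA toolkit directories.
import os

for entry in os.environ["PATH"].split(os.pathsep):
    if "cuda" in entry.lower():
        print(entry)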
I installed the 11.3.
My PATHS are fine:
But I still get: Torch not compiled with CUDA enabled
What do I do? Should I try to install TensorFlow?
I am not sure. I would say at this point your issue is a PyTorch issue rather than an UnstableFusion issue. You may find better help in the PyTorch forums; I am not really an expert on this.
Finally, working...
Can this be ignored?:
{'trained_betas'} was not found in config. Values will be initialized to default values.
Yes, that is not a problem.
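If the message bothers you, newer diffusers versions expose a logging helper that lowers the library's verbosity; I am not sure it exists in 0.3.0 and it may or may not hide this particular message, so treat it as optional:

# Optional: lower diffusers' log level (may not be available in very old versions).
from diffusers.utils import logging

logging.set_verbosity_error()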
https://user-images.githubusercontent.com/13344308/191976776-ab6a9235-a799-4ea8-9158-d7e3a55325f5.mp4
And just clicking on the canvas gives an error too, and it closes by itself.
https://user-images.githubusercontent.com/13344308/191977180-b545d36c-af42-446a-a1c8-6a948e1c173f.mp4