hithereai opened 1 year ago
Hi, I'm on Windows 10 and I had some trouble installing as well. This worked for me:
conda create -n if python=3.10 -y
conda activate if
conda install pip git -y
pip install deepfloyd_if==1.0.1
pip install xformers==0.0.19
pip install git+https://github.com/openai/CLIP.git --no-deps
pip uninstall torch -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install huggingface_hub
pip install --upgrade diffusers accelerate transformers safetensors
This is rather inefficient, as it installs and then uninstalls a bunch of modules, but it worked for me. Make sure to disable the xformers memory-efficient attention, as this ends up installing PyTorch 2.0.0.
@kanttouchthis Downloading models, will message if successful!
@C0nsumption they are being downloaded automatically using the code, no need to manually download them from HF.
@kanttouchthis it worked :)
@hithereai AssertionError: Torch not compiled with CUDA enabled
:(
Looks like you somehow installed the CPU build of torch. Try running this step again:
pip uninstall torch -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
and then check your torch version after installing (pip list). It should be 2.0.0+cu118 and not 2.0.0.
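A quick way to check the same thing from Python (a minimal sketch, assuming the cu118 wheel from the step above):

import torch

print(torch.__version__)          # should be '2.0.0+cu118', not plain '2.0.0'
print(torch.version.cuda)         # should report '11.8' for the cu118 wheel
print(torch.cuda.is_available())  # must be True, or you'll hit the assertion above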
Got it workin, friggin love y'all!!!!
The steps above worked for me too. But they did give me an error about needing protobuf, and it specifically wanted protobuf 3.20 or less. After that it worked.
I tried on Windows as well with the above instructions. I got the following error messages from pip:
deepfloyd-if 1.0.1 requires accelerate~=0.15.0, but you have accelerate 0.18.0 which is incompatible.
deepfloyd-if 1.0.1 requires torch<2.0.0, but you have torch 2.0.0+cu118 which is incompatible.
deepfloyd-if 1.0.1 requires transformers~=4.25.1, but you have transformers 4.28.1 which is incompatible.
When I tried to run the sample code, the model download starts, but when it tries to load the model I get this error:
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'stabilityai\stable-diffusion-x4-upscaler'.
It seems there is some file path issue on Windows. Are there any alternate ways to run this on Windows?
Check this issue for a potential fix to the HFValidationError. The pip warnings can probably be ignored, assuming you installed everything in the right order.
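For context on why this only shows up on Windows: the backslash in the repo id suggests the stage III loader joins 'stabilityai' and the model name with the OS path separator. A minimal sketch of the idea behind the fix (a later comment applies the same change directly to line 23 of stage_III_sd_x4.py):

import os

dir_or_name = 'stable-diffusion-x4-upscaler'
# On Windows, os.path.join uses a backslash, which is not a valid Hugging Face
# repo id separator, hence the HFValidationError above.
broken = os.path.join('stabilityai', dir_or_name)  # 'stabilityai\stable-diffusion-x4-upscaler'
# Repo ids always use a forward slash, regardless of OS:
fixed = 'stabilityai/' + dir_or_name               # 'stabilityai/stable-diffusion-x4-upscaler'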
I've followed the above steps, and this code:
from deepfloyd_if.modules import IFStageI, IFStageII, StableStageIII
from deepfloyd_if.modules.t5 import T5Embedder
device = 'cuda:0'
if_I = IFStageI('IF-I-XL-v1.0', device=device)
if_II = IFStageII('IF-II-L-v1.0', device=device)
if_III = StableStageIII('stable-diffusion-x4-upscaler', device=device)
t5 = T5Embedder(device="cpu")
from deepfloyd_if.pipelines import dream
prompt = 'ultra close-up color photo portrait of rainbow owl with deer horns in the woods'
count = 4
result = dream(
    t5=t5, if_I=if_I, if_II=if_II, if_III=if_III,
    prompt=[prompt]*count,
    seed=42,
    if_I_kwargs={
        "guidance_scale": 7.0,
        "sample_timestep_respacing": "smart100",
    },
    if_II_kwargs={
        "guidance_scale": 4.0,
        "sample_timestep_respacing": "smart50",
    },
    if_III_kwargs={
        "guidance_scale": 9.0,
        "noise_level": 20,
        "sample_timestep_respacing": "75",
    },
)
if_III.show(result['III'], size=14)
It seems to be hanging at this line:
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:10<00:00, 5.16s/it]
and it just sits.
I was able to run the example before that ("...kangaroo wearing an orange hoodie..."), and it runs fine, in maybe 20 seconds or so.
My install process is this, from above:
conda create -n if python=3.10 -y
conda activate if
conda install pip git -y
pip install deepfloyd_if==1.0.1
pip install xformers==0.0.19
pip install git+https://github.com/openai/CLIP.git --no-deps
pip uninstall torch -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install huggingface_hub
pip install --upgrade diffusers accelerate transformers safetensors
pip install protobuf==3.20
I'm on Windows 11, python 3.10, and I've already modified line 23 of stage_III_sd_x4.py to be model_id = 'stabilityai/' + self.dir_or_name
I'm on a 4090 with 24GB of VRAM, and so far it looks like it's topped out at 16GB used.
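To confirm that number from inside the same process while it sits there, a minimal sketch (watching nvidia-smi in another terminal works just as well):

import torch

# Current and peak GPU memory allocated by this process, in GiB.
print(torch.cuda.memory_allocated() / 2**30)
print(torch.cuda.max_memory_allocated() / 2**30)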
Tried to follow the instructions; it ended in a total disaster. Each pip package wants to install its own torch version, and I couldn't get anything to work. Followed the instructions 1:1 multiple times in a few different fresh envs, to no avail.
Also tried with a fresh new PT2 venv, to no avail.
Could you please re-test your instructions, preferably on Windows? I have an RTX 4090 with 24GB of VRAM, and I couldn't even get to the loading-into-VRAM part.