deep-floyd / IF


Error when running through examples: "When passing variant='fp16' upgrade `transformers` to at least 4.27.0.dev0" #46

Open klei22 opened 1 year ago

klei22 commented 1 year ago

Running through one of the examples, I'm hitting the following error related to the transformers version:

Traceback (most recent call last):
  File "test3.py", line 9, in <module>
    stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", torch_dtype=torch.float16)
  File "${HOME}/miniconda3/envs/deepfloyd/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1039, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "${HOME}/miniconda3/envs/deepfloyd/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 431, in load_sub_model
    raise ImportError(
ImportError: When passing `variant='fp16'`, please make sure to upgrade your `transformers` version to at least 4.27.0.dev0

It appears that transformers 4.25.1 is the version installed when using the requirements.txt file and following the README instructions.

I'm currently rerunning (after removing 4.25.1 and installing transformers 4.28.1); would 4.28.1 be compatible, or does the library need to stay under a certain version?
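
As a quick sanity check, this is how I'm confirming which transformers version the environment actually picks up (just a two-line check, nothing specific to this repo):

import transformers
print(transformers.__version__)  # the error above asks for at least 4.27.0 when variant="fp16" is used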

Thanks! : )

Sharing the sample code I've been using to test:

from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil
import torch
from huggingface_hub import login

login()

# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_xformers_memory_efficient_attention()  # remove line if torch.__version__ >= 2.0.0
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-M-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_xformers_memory_efficient_attention()  # remove line if torch.__version__ >= 2.0.0
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16)
stage_3.enable_xformers_memory_efficient_attention()  # remove line if torch.__version__ >= 2.0.0
stage_3.enable_model_cpu_offload()
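
For context, this is the kind of generation code I'd run after loading the three stages, adapted from the DeepFloyd IF model card example; the prompt and seed are arbitrary placeholders, and the error above fires before any of this is reached:

prompt = "a photo of a kangaroo wearing an orange hoodie"  # placeholder prompt
generator = torch.manual_seed(0)

# text embeddings come from stage 1's T5 encoder
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1: 64x64 base image
image = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds,
    generator=generator, output_type="pt",
).images
pt_to_pil(image)[0].save("./if_stage_I.png")

# stage 2: upscale to 256x256
image = stage_2(
    image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds,
    generator=generator, output_type="pt",
).images
pt_to_pil(image)[0].save("./if_stage_II.png")

# stage 3: x4 upscale with the Stable Diffusion upscaler
image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images
image[0].save("./if_stage_III.png")
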
ThioJoe commented 1 year ago

Same problem

ipechman commented 1 year ago

Same

hdeping commented 1 year ago

same problem

patrickvonplaten commented 1 year ago

Please use transformers>=4.27.0 here

ipechman commented 1 year ago

Please use transformers>=4.27.0 here

I tried this, but it just throws another error about accelerate… it says it needs to be at least version 0.17.0. I installed 0.17.0 and it was still broken…
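
For what it's worth, upgrading both packages in one go should cover the two minimum versions mentioned in this thread (the accelerate pin is inferred from the error described above, so double-check it against the exact message you get):

pip install --upgrade "transformers>=4.27.0" "accelerate>=0.17.0"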

kanttouchthis commented 1 year ago

the only way I have been consistently able to run the code is with PyTorch 2.0.0 and the latest versions of everything:

conda create -n if python=3.10 -y
conda activate if
conda install pip git -y
pip install deepfloyd_if==1.0.1
pip install xformers==0.0.19
pip install git+https://github.com/openai/CLIP.git --no-deps
pip uninstall torch -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install huggingface_hub
pip install --upgrade diffusers accelerate transformers safetensors
$ pip list
accelerate                0.18.0
deepfloyd-if              1.0.1
diffusers                 0.16.1
safetensors               0.3.1
torch                     2.0.0+cu118
transformers              4.28.1
xformers                  0.0.19
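
To confirm a fresh environment actually resolves these versions before running the pipelines, a quick check like this works (just a sketch; package names are as they appear in pip list):

from importlib.metadata import version

for pkg in ("diffusers", "transformers", "accelerate", "safetensors", "torch", "xformers"):
    print(pkg, version(pkg))
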
jakerator commented 1 year ago

the only way I have been consistently able to run the code is with PyTorch 2.0.0 and the latest versions of everything […]

Does it work only with Python 3.10?

patrickvonplaten commented 1 year ago

No, you only need Python > 3.7.

thusinh1969 commented 1 year ago

the only way I have been consistently able to run the code is with PyTorch 2.0.0 and the latest versions of everything […]

Grruhhhh, I followed the guideline and it did NOT work. I have to redo everything from a double-build Docker and all for this... I will report back soon. Hopefully it should work!

patrickvonplaten commented 1 year ago

Opened a PR here: https://github.com/deep-floyd/IF/pull/95