deep-floyd / IF


Offload_folder is ignored? #88

Open spacewalkingninja opened 1 year ago

spacewalkingninja commented 1 year ago

```
/usr/local/lib/python3.10/dist-packages/accelerate/utils/modeling.py:872 in load_checkpoint_in_model

   869 │     """
   870 │     tied_params = find_tied_parameters(model)
   871 │     if offload_folder is None and device_map is not None and "disk" in device_map.values
❱  872 │         raise ValueError(
   873 │             "At least one of the model submodule will be offloaded to disk, please pass
   874 │         )
   875 │     elif offload_folder is not None and device_map is not None and "disk" in device_map.

ValueError: At least one of the model submodule will be offloaded to disk, please pass along an offload_folder.
```
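For reference, the condition that trips here can be sketched as a small standalone check (a hypothetical helper mirroring the logic visible in the traceback, not the accelerate API itself): the error fires whenever the resolved `device_map` places any submodule on `"disk"` while `offload_folder` is `None`.

```python
def check_offload(device_map, offload_folder=None):
    """Mirror of the guard in accelerate's load_checkpoint_in_model (sketch)."""
    if offload_folder is None and device_map is not None and "disk" in device_map.values():
        raise ValueError(
            "At least one of the model submodule will be offloaded to disk, "
            "please pass along an offload_folder."
        )

# A map that spills to disk without an offload_folder raises;
# the same map with a folder, or a GPU-only map, does not.
check_offload({"text_encoder": 0})                           # ok
check_offload({"text_encoder": "disk"}, "/content/offload")  # ok
```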

This happens on Colab, running modified demo code.

I have been experimenting with both the XL and M models to compare speed versus quality.

I then loaded the XL model again in the same session, calling flush() and del on the pipes between runs. The line that raises the error is:

```python
text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    subfolder="text_encoder",
    device_map="auto",
    load_in_8bit=True,
    variant="8bit",
)
```

```python
pipe = IFImg2ImgPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,
    unet=None,
    device_map="auto",
)
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
```

Free some memory:

```python
del pipe
del text_encoder
```
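flush() is not defined in the snippet; a common definition (assumed here, following the pattern used in the diffusers memory-optimization examples) collects Python garbage and empties the CUDA cache:

```python
import gc

def flush():
    """Release dropped objects and, when torch is available, cached CUDA memory."""
    gc.collect()
    try:
        import torch
        torch.cuda.empty_cache()
    except ImportError:
        # torch not installed in this environment; nothing to empty
        pass
```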

```python
for image in images:
    flush()
    pipe = IFImg2ImgPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0",
        text_encoder=None,
        variant="fp16",
        torch_dtype=torch.float16,
        device_map="auto",
        offload_folder="/content/offload",  # THIS IS APPARENTLY IGNORED? SHOULD IT BE IGNORED?
    )
```
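One way to check whether the kwarg is actually being honored (a hypothetical diagnostic, not part of the diffusers API) is to point `offload_folder` at a fresh directory and see whether accelerate writes any weight shards into it after loading:

```python
import os
import tempfile

def offload_was_used(offload_dir):
    """True if anything (e.g. offloaded weight files) was written into the folder."""
    return os.path.isdir(offload_dir) and len(os.listdir(offload_dir)) > 0

offload_dir = tempfile.mkdtemp(prefix="if_offload_")
# pipe = IFImg2ImgPipeline.from_pretrained(..., offload_folder=offload_dir)
print(offload_was_used(offload_dir))  # False here: nothing has been loaded yet
```

If the directory stays empty after a load whose device_map contains "disk", the argument is indeed being dropped somewhere before it reaches accelerate.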