Stability-AI / stablediffusion

High-Resolution Image Synthesis with Latent Diffusion Models
MIT License

RuntimeError: Pretrained weights (laion2b_s32b_b79k) not found for model ViT-H-14. #233


yangzhipeng1108 commented 1 year ago

```
Traceback (most recent call last):
  File "scripts/txt2img.py", line 388, in <module>
    main(opt)
  File "scripts/txt2img.py", line 219, in main
    model = load_model_from_config(config, f"{opt.ckpt}", device)
  File "scripts/txt2img.py", line 34, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "/root/nas-share/ds/stablediffusion-main/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/nas-share/ds/stablediffusion-main/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/root/nas-share/ds/stablediffusion-main/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/root/nas-share/ds/stablediffusion-main/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/nas-share/ds/stablediffusion-main/ldm/modules/encoders/modules.py", line 190, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
  File "/usr/local/lib/python3.8/dist-packages/open_clip/factory.py", line 133, in create_model_and_transforms
    model = create_model(
  File "/usr/local/lib/python3.8/dist-packages/open_clip/factory.py", line 111, in create_model
    raise RuntimeError(f'Pretrained weights ({pretrained}) not found for model {model_name}.')
RuntimeError: Pretrained weights (laion2b_s32b_b79k) not found for model ViT-H-14.
```

lucasjinreal commented 1 year ago

Same issue

Hongjiew commented 1 year ago

Same issue

Hongjiew commented 1 year ago

I solved this by upgrading open_clip: `pip install -U open_clip_torch`
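A quick way to confirm the upgrade took effect: the installed open_clip should register the `("ViT-H-14", "laion2b_s32b_b79k")` pair. A minimal sanity check:

```python
import open_clip

# list_pretrained() returns every (architecture, pretrained-tag) pair the
# installed open_clip knows about; the tag from the error should be present.
print(("ViT-H-14", "laion2b_s32b_b79k") in open_clip.list_pretrained())
```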

misi0202 commented 1 year ago

My environment doesn't have internet access, so I downloaded the model as a zip and changed the code in txt2img.py. This worked for me:

```python
cache_dir = "/root/.cache/huggingface/hub/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/94a64189c3535c1cb44acfcccd7b0908c1c8eb23"
model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), cache_dir=cache_dir)
```

Hope that is useful for you.
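If the offline machine can't reach huggingface.co at all, one way to populate that snapshot directory is to fetch it on an internet-connected machine and copy the cache over; a sketch assuming `huggingface_hub` is installed (the repo id is taken from the cache path above):

```python
from huggingface_hub import snapshot_download

# Run on a machine with internet access, then copy the resulting
# models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K tree into
# ~/.cache/huggingface/hub on the offline machine.
snapshot_download(
    repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
    cache_dir="/root/.cache/huggingface/hub",
)
```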

jiawensong commented 1 year ago

I have met the same problem; my environment also can't connect to Hugging Face. I downloaded open_clip_pytorch_model.bin, copied it to this machine with scp, set up its path, and used cache_dir as recommended above, but it doesn't work. Can anyone who has run this successfully give more help?
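One likely reason the cache_dir route fails with a bare .bin: open_clip hands `cache_dir` to huggingface_hub, which expects the hub's `models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/<hash>/` directory layout, not a loose checkpoint file. A loose file can instead be passed directly as `pretrained` (this is also what the fix in the next comment does); a minimal sketch, with a hypothetical path:

```python
import torch
import open_clip

# Hypothetical location of the manually copied checkpoint.
ckpt = "/data/weights/open_clip_pytorch_model.bin"

# open_clip treats `pretrained` as a local checkpoint file when the value is
# an existing path rather than a registered tag like "laion2b_s32b_b79k".
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", device=torch.device("cpu"), pretrained=ckpt
)
```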

Planetinaline commented 4 months ago

It seems the location given for the model weights was wrong. It worked fine for me after I changed the relevant path to an absolute path, like this:

```python
def __init__(self, arch="ViT-H-14",
             version="/root/autodl-tmp/dreamgaussian_0/CLIP-ViT-H-14-laion2B-s32B-b79K/open_clip_pytorch_model.bin",
             device="cuda", max_length=77, freeze=True, layer="last"):
    super().__init__()
    assert layer in self.LAYERS
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
```
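This works because open_clip falls back to loading `pretrained` as a local file when the string is an existing path rather than a registered tag. If editing the class default is undesirable, the same path can presumably be passed in as the `version` argument at instantiation (or via the model config's cond_stage params) instead; a sketch assuming the stock FrozenOpenCLIPEmbedder from ldm/modules/encoders/modules.py:

```python
from ldm.modules.encoders.modules import FrozenOpenCLIPEmbedder

# Override `version` at instantiation instead of hard-coding it in __init__;
# open_clip then loads the checkpoint file directly from this path.
embedder = FrozenOpenCLIPEmbedder(
    arch="ViT-H-14",
    version="/root/autodl-tmp/dreamgaussian_0/CLIP-ViT-H-14-laion2B-s32B-b79K/open_clip_pytorch_model.bin",
    device="cpu",
)
```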