hotshotco / Hotshot-XL

✨ Hotshot-XL: State-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL
https://hotshot.co
Apache License 2.0

ValueError: unet/hotshot_xl.py as defined in `model_index.json` does not exist in hotshotco/Hotshot-XL #29

Open · billzhao9 opened this issue 7 months ago

billzhao9 commented 7 months ago

diffusers/pipelines/pipeline_utils.py", line 1680, in download
ValueError: unet/hotshot_xl.py as defined in `model_index.json` does not exist in hotshotco/Hotshot-XL and is not a module in 'diffusers/pipelines'.

Somehow, when I first launched inference, it could not load the model properly.
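
For context, a minimal sketch of the call that fails. The import path, class name, and dtype here are assumptions about how the repo's inference.py sets things up; only the from_pretrained call itself is taken from the traceback later in this thread.

```python
import torch

# Assumed import path and class name; inference.py resolves PipelineClass dynamically.
from hotshot_xl.pipelines.hotshot_xl_pipeline import HotshotXLPipeline

# With a recent diffusers release installed, this raises:
#   ValueError: unet/hotshot_xl.py as defined in `model_index.json` does not exist
#   in hotshotco/Hotshot-XL and is not a module in 'diffusers/pipelines'.
pipe = HotshotXLPipeline.from_pretrained(
    "hotshotco/Hotshot-XL",
    torch_dtype=torch.float16,  # assumption: fp16 weights, as in the repo's examples
).to("cuda")
```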

Skquark commented 7 months ago

I've also been getting that error this whole time and was wondering why nobody had addressed it, since everything else seems to be in place correctly. I tried to track it down from the Hugging Face model it loads with the default pretrained_path, and it looked normal except there was no unet/hotshot_xl.py file as stated. It looks all safetensors-based, and it's being loaded in the older Diffusers format. I'm thinking that on the PipelineClass.from_pretrained( line you need to add the param use_safetensors=True; it could be that easy. It's also possible it's not working for me because I'm using the latest diffusers instead of the v0.21.4 that's pinned in the requirements. Here's the full error:

File "/content/Hotshot-XL/inference.py", line 231, in <module>
main()
File "/content/Hotshot-XL/inference.py", line 169, in main
pipe = PipelineClass.from_pretrained(args.pretrained_path, **pipe_line_args).to(device)
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1074, in from_pretrained
cached_folder = cls.download(
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1680, in download
raise ValueError(
ValueError: unet/hotshot_xl.py as defined in `model_index.json` does not exist in hotshotco/Hotshot-XL and is not a module in 'diffusers/pipelines'.
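
A quick, hedged way to double-check the observation above about the repo contents, using only standard huggingface_hub calls (no Hotshot-specific code assumed): list the files in the Hub repo and confirm there is no unet/hotshot_xl.py, only configs and safetensors weights.

```python
from huggingface_hub import list_repo_files

# List every file in the Hub repo and check for the module diffusers is looking for.
files = list_repo_files("hotshotco/Hotshot-XL")
print("\n".join(sorted(files)))
print("unet/hotshot_xl.py present:", "unet/hotshot_xl.py" in files)
```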

I peeked at the code for the auto1111 implementation and saw they load the model directly from the hsxl_temporal_layers.f16.safetensors file they have you manually download to the models folder. So I tried downloading that safetensors file and putting its path in the pretrained_path var, but it wouldn't load it locally; it kept trying to resolve that path on huggingface.co, so it would need to be loaded via from_single_file instead for that to work. Any suggestions? I've almost got it working in my UI at https://DiffusionDeluxe.com along with a bunch of other video AIs, so it'd be nice to get it off my checklist and start playing with it. Thanks.
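
If it helps debugging, here is a rough sketch of pulling that temporal-layers file and inspecting it locally. The filename comes from the comment above and is assumed to live at the root of the hotshotco/Hotshot-XL repo; this is plain huggingface_hub + safetensors usage, not a confirmed loading path for the diffusers pipeline.

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Assumption: the standalone temporal-layers file sits at the repo root on the Hub.
path = hf_hub_download(
    repo_id="hotshotco/Hotshot-XL",
    filename="hsxl_temporal_layers.f16.safetensors",
)

# Load the tensors and print a few keys/shapes to see what the file actually contains.
state_dict = load_file(path)
print(f"{len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```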

johnmullan commented 7 months ago

What are the repro steps for this? Is this with the latest diffusers? I suspect something has changed in it that is breaking the reference to the unet model pointed to here: https://huggingface.co/hotshotco/Hotshot-XL/blob/main/model_index.json#L25
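
To see what that entry of model_index.json currently resolves to, a quick check with plain huggingface_hub plus json (nothing Hotshot-specific assumed):

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the pipeline index and print the unet entry diffusers tries to import.
index_path = hf_hub_download(repo_id="hotshotco/Hotshot-XL", filename="model_index.json")
with open(index_path) as f:
    model_index = json.load(f)

print(model_index.get("_diffusers_version"))
print(model_index.get("unet"))  # a [library, class] pair; a custom library triggers the unet/<library>.py lookup
```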

Skquark commented 7 months ago

To reproduce, it's basically running the inference.py command with params pretty much the same as in the docs and examples. I can show you my full code leading up to it, but I don't expect it has anything to do with this particular error. I'm running it in my DiffusionDeluxe app, with the open code buried in there to give it a UI, and I install dependencies manually instead of running the requirements.txt installs, but the problem is when it looks for the /unet/hotshot_xl.py file. I don't know as much about the architecture of model files, but a guess would be to change "hotshot_xl" to "diffusers" in the model_index.json unet property, like I see in most other SDXL model files. Side note: I tested my theory of adding use_safetensors=True to that from_pretrained call, but that option isn't there in this pipeline as it is in the other diffusers pipelines, so that's not the answer...

donc-py commented 7 months ago

Yes, the same happens to me; the issue is in the diffusers files.

aylum1234 commented 7 months ago

Works with `!pip install diffusers==0.21.4`. I came here from those Colab notebooks: https://github.com/camenduru/Hotshot-XL-colab
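
For anyone landing here later, a minimal environment sanity check based on the fix above; the 0.21.4 pin comes from the repo's requirements and aylum1234's report, nothing else is assumed.

```python
import diffusers

# Fail fast if the installed diffusers is not the version the repo pins,
# which (per this thread) is what triggers the unet/hotshot_xl.py ValueError.
if diffusers.__version__ != "0.21.4":
    raise RuntimeError(
        f"diffusers {diffusers.__version__} is installed; this thread reports the loader "
        "only works with diffusers==0.21.4 (pip install diffusers==0.21.4)."
    )
print("diffusers version OK:", diffusers.__version__)
```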