Hi @sayakpaul - awesome work on diffusers as always :). I wanted to put it up as a Gradio (ZeroGPU) inference demo: https://huggingface.co/spaces/jadechoghari/flux-kiwi/tree/main but I'm getting the following error.
Error logs: https://huggingface.co/spaces/jadechoghari/flux-kiwi/tree/main?logs=container
Env (requirements.txt): https://huggingface.co/spaces/jadechoghari/flux-kiwi/blob/main/requirements.txt
Thanks!
Unable to see the error logs, but from the requirements it looks like torchao is not installed from source and PyTorch nightly isn't used. These are currently the expected env settings. It would be more helpful if you could paste the error logs here :)
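For reference, that environment roughly amounts to the following (the cu121 nightly index is an assumption; pick the one matching your CUDA version):

pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
USE_CPP=0 pip install git+https://github.com/pytorch/ao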
Sure!

===== Application Startup at 2024-09-14 22:43:59 =====

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache()
0it [00:00, ?it/s]
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]
You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 2.59it/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:07<00:00, 1.01s/it]
ERR: subclass doesn't implement <method 'detach' of 'torch._C.TensorBase' objects>
Traceback (most recent call last):
  File "/home/user/app/app.py", line 13, in
Ah yes, this is a known error and was fixed here. You would also need to install accelerate from source until the next release.
Edit: Sorry, this might not be the same error (I misread the one you pasted). If installing from source does not help, I'll take a deeper look soon, unless Sayak gets here before me.
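For completeness, installing accelerate from source is just:

pip install git+https://github.com/huggingface/accelerate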
thanks @a-r-r-o-w - yes same error :(
Can you try with torch nightly? Also, torchao installed from source. All our experiments are with torch nightly, which we make clear in our README.
Okay, I was able to replicate this in a Colab notebook. I'm not up to date on the torchao changes, but something definitely seems to be broken by the latest changes; I haven't pinned down the exact cause yet.
To fix it, I installed torchao from a version I know works. Could you try this:
USE_CPP=0 pip install git+https://github.com/pytorch/ao@cfabc13e72fd03934e62a2a03903bc1678235bed
And of course, please use torch nightly alongside this.
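For the nightly part, something like this should work (the CUDA index is an assumption):

pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121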
Thanks both! I updated the requirements.txt: https://huggingface.co/spaces/jadechoghari/flux-kiwi/blob/main/requirements.txt Still the same issue. It's okay, it could be something with the Gradio env, I guess...
So, you don't see the issue when using it without gradio?
Hmm, not sure why you would be getting the same error. Here's my Colab notebook using the same torchao commit; it runs fully on Colab without errors or OOMs: https://colab.research.google.com/drive/1DUffhcjrU-uz7_cpuJO3E_D4BaJT7OPa?usp=sharing
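In essence, the setup under test looks something like this (a sketch; the model id, prompt, and int8 weight-only config are illustrative and not necessarily what the notebook uses):

import torch
from diffusers import FluxPipeline
from torchao.quantization import quantize_, int8_weight_only

# Load Flux in bf16, then quantize the transformer weights in place with torchao.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
quantize_(pipe.transformer, int8_weight_only())
pipe.to("cuda")

# The quantized weights are torchao tensor subclasses, which appears to be
# where the "subclass doesn't implement detach" error comes from in the Space.
image = pipe(
    "a kiwi bird, studio lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("kiwi.png")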
@a-r-r-o-w Yup, the notebook is reproducible! The issue mainly shows up with Flux when we deploy it on HF Spaces.
Just to confirm, does the requirements.txt look good?
diffusers
torch --pre -f https://download.pytorch.org/whl/nightly/cu122/torch_nightly.html
gradio==3.35.2
torchao @ git+https://github.com/pytorch/ao@cfabc13e72fd03934e62a2a03903bc1678235bed
transformers
sentencepiece
git+https://github.com/huggingface/accelerate
optimum-quanto
peft
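For comparison, here is a requirements.txt sketch closer to the suggestions above. Pip options such as --pre and the index URL belong on their own lines; the cu121 nightly index is an assumption, and note that the USE_CPP=0 flag used above cannot be set from a requirements file:

# torch nightly (adjust the CUDA index to your setup)
--pre
--extra-index-url https://download.pytorch.org/whl/nightly/cu121
torch
diffusers
transformers
sentencepiece
accelerate @ git+https://github.com/huggingface/accelerate
torchao @ git+https://github.com/pytorch/ao@cfabc13e72fd03934e62a2a03903bc1678235bed
optimum-quanto
peft
gradio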
@sayakpaul the issue comes up when we deploy on HF Spaces; I'll dig deeper to figure out why. If I find a fix, I'll let you know! I haven't tested the script on Colab yet since Flux is pretty large.
Are you using ZeroGPU? Could it be because of that?
Yes, I believe that could be the issue. @sayakpaul, do you think it deserves further investigation, or should we close this issue?
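For context, a ZeroGPU Space only attaches a GPU while a decorated function runs, roughly like this (a sketch of the usual pattern, not the Space's actual app.py):

import spaces  # provided on ZeroGPU Spaces
import torch
from diffusers import FluxPipeline

# The pipeline is built at startup on CPU; ZeroGPU grants a GPU per call.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

@spaces.GPU
def generate(prompt):
    pipe.to("cuda")
    return pipe(prompt, num_inference_steps=28).images[0]

Since weights are shuffled between CPU and GPU around that call, it seems plausible that torchao's tensor subclasses don't survive ZeroGPU's device handling, which would line up with the detach error above.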
I think this could be reported to the ZeroGPU maintainers separately. Closing this issue, then.