Closed francqz31 closed 8 months ago
Could you fix this?

Not really; it is either a problem with Colab's environment or a problem in the existing code. I can't try it on my PC for now, so I have to wait for a response.
The first time I run `!python gradio_demo.py` it downloads a 10.2 GB file:

Downloading (…)ip_pytorch_model.bin: 94% 9.56G/10.2G [09:16<00:22, 26.9MB/s]

Then I guess it tries to load the checkpoint into memory, but the RAM that the free Colab tier provides is not enough, so it raises that error. It would be a shame if that is the cause.
You would have to try Colab Pro; if you have Colab Pro, you can try it.
The gradio demo is now deployed to HuggingFace space, available here.
Unlike `gradio_demo` in this repo, we use `prompt_model, _, _ = open_clip.create_model_and_transforms('ViT-bigG-14', 'laion2b_s39b_b160k', device='cpu')`, which uses 'fp16' by default.
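As a minimal sketch of the loading call above (the `clip_load_kwargs` helper and the `--load` flag are my own additions, not part of the repo): fp16 halves the RAM needed for the ~10.2 GB checkpoint but is slow or unsupported for some CPU ops, so a common workaround is fp32 on CPU and fp16 only on GPU. The actual download is gated so that merely importing the file does not pull 10 GB.

```python
import sys


def clip_load_kwargs(device="cpu"):
    """Pick a precision for open_clip: fp16 on GPU, fp32 on CPU.

    fp16 roughly halves checkpoint memory, but several CPU ops do not
    support half precision, so fall back to fp32 off-GPU.
    """
    return {"device": device, "precision": "fp16" if device == "cuda" else "fp32"}


# Gate the real load (downloads ~10.2 GB on first run) behind an explicit
# flag so importing this file stays cheap:
if __name__ == "__main__" and "--load" in sys.argv:
    import open_clip  # pip install open_clip_torch

    prompt_model, _, _ = open_clip.create_model_and_transforms(
        "ViT-bigG-14", "laion2b_s39b_b160k", **clip_load_kwargs("cpu")
    )
```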
@loboere @francqz31 I faced the same error on the free version of Colab, but after upgrading to the Pro version it worked. So the issue is caused by not having enough RAM or VRAM.
Hi authors, here is a full demo Colab that you can build on, but I get these two errors:
https://colab.research.google.com/drive/13BIlDBZK4rcrzLxK3N7tamrPcvX6lAIB#scrollTo=HcBUTwmFjpUT
Whenever I run the inference or the training scripts I get this error: `subprocess.CalledProcessError: Command '['/usr/local/bin/python3.10', 'train_t2i_custom_v2.py', '--config=configs/custom.py']' died with <Signals.SIGKILL: 9>`.

And whenever I run `!python gradio_demo.py` I get this:

2023-07-04 15:37:43.188 | DEBUG | open_clip.transformer:init:314 - xattn in transformer of CLIP is False
2023-07-04 15:38:02.273 | DEBUG | open_clip.transformer:init:314 - xattn in transformer of CLIP is False
^C

I'm not sure what's happening, but if you can identify the problem, that would be amazing. Thanks in advance.
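A `SIGKILL: 9` on Colab almost always means the Linux OOM killer terminated the process, which matches the RAM diagnosis above. As a quick sanity check before launching a heavy script, you can read `MemAvailable` from `/proc/meminfo` (Linux-only; the `available_ram_gb` helper and the 13 GiB threshold are illustrative assumptions, not part of the repo):

```python
def available_ram_gb(meminfo_text):
    """Parse the 'MemAvailable' line (reported in kB) from /proc/meminfo
    content and return it in GiB, or None if the line is missing."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1]) / (1024 ** 2)  # kB -> GiB
    return None


if __name__ == "__main__":
    try:
        with open("/proc/meminfo") as f:
            gb = available_ram_gb(f.read())
        print(f"Available RAM: {gb:.1f} GiB")
        # The ~10.2 GB checkpoint needs headroom on top of its own size.
        if gb is not None and gb < 13:
            print("Likely not enough RAM for gradio_demo.py on free Colab.")
    except FileNotFoundError:
        print("Not a Linux host; /proc/meminfo unavailable.")
```

If the reported figure is close to the checkpoint size, expect the OOM killer to fire, and upgrading to Colab Pro (more RAM) is the practical fix, as noted above.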