pondloso opened this issue 1 year ago
Same issue here, on Colab.
Now it's fixed, but I get an OOM error during processing. How do I enable xformers?
..........
Update: I managed to install xformers and generate the 1st key frame, but then it gets stuck at 100% complete. It doesn't continue and shows no error, it just hangs there and my VRAM stays fully used. Maybe my GPU doesn't have enough VRAM.
Update einops, e.g. pip install --upgrade einops
How did you fix this error? And how did you get xformers working?
I already updated einops and I'm still getting the error.
Fixed by updating einops as mentioned above.
pip install --upgrade einops
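In case it helps someone verify the fix, here is a minimal sanity check (the shape below is just an arbitrary example) that the environment you actually launch from picked up the new einops and that the rearrange call from Rerender_A_Video's img_util.numpy2tensor now works:

import einops
import torch

print(einops.__version__)  # should show the upgraded version
x = torch.zeros(1, 64, 64, 3)  # dummy b h w c tensor
y = einops.rearrange(x, 'b h w c -> b c h w')  # same pattern as numpy2tensor
print(tuple(y.shape))  # expected: (1, 3, 64, 64)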
=============================================
I am also running into something similar, on Win10 with a 3060 Ti GPU.
X:\AI\Rerender_A_Video>python rerender.py --cfg config/real2sculpture.json
logging improved.
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loaded model config from [./deps/ControlNet/models/cldm_v15.yaml]
Loaded state_dict from [./models/control_sd15_canny.pth]
C:\Users\Cain\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(filename, framework="pt", device=device) as f:
Traceback (most recent call last):
File "X:\AI\Rerender_A_Video\rerender.py", line 462, in
How did you fix this error? And how did you get xformers working?
I built a new environment for this and installed xformers in it, but you also need to install a newer version of torch and CUDA in the new environment, because the old versions there wouldn't compile it.
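For reference, a rough sketch of what that looks like on Windows (the CUDA 11.8 index URL is only an example; pick the wheel that matches your driver, and the xformers build must match the torch build):

python -m venv venv
venv\Scripts\activate
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install xformers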
I did compile a new CUDA-based torch in the venv environment. The environment was created by converting the conda YAML to a requirements.txt for pip, since I don't use conda.
Even after doing all this, the interface starts and I can produce the first keyframe, which looks nice, but the "keras.backend" error still happens.
I had the same problem. I tried reinstalling tensorflow and keras at different compatible versions, but that didn't help. I did resolve the problem, though (it's not a good fix, but it works for me now):
astunparse blendmodes accelerate basicsr fonts font-roboto gfpgan gradio==3.28.1 numpy omegaconf opencv-contrib-python requests piexif Pillow pytorch_lightning==1.7.7 realesrgan scikit-image>=0.19 timm==0.4.12 transformers==4.25.1 torch einops jsonmerge clean-fid resize-right torchdiffeq kornia lark inflection GitPython torchsde safetensors psutil rich
accelerate==0.23.0
aiofiles==23.2.1 altair==4.2.2 dadaptation==3.1 diffusers[torch]==0.21.4 easygui==0.98.3 einops==0.6.0 fairscale==0.4.13 ftfy==6.1.1 gradio==3.36.1 huggingface-hub==0.15.1
invisible-watermark==0.2.0 lion-pytorch==0.0.6 lycoris_lora==1.9.0
onnx==1.14.1 onnxruntime-gpu==1.16.0
protobuf==3.20.3
open-clip-torch==2.20.0 opencv-python==4.7.0.68 prodigyopt==1.0 pytorch-lightning==1.9.0 rich==13.4.1 safetensors==0.3.1 timm==0.6.12 tk==0.1.0 toml==0.10.2 transformers==4.30.2 voluptuous==0.13.1 wandb==0.15.11
-e . # no_verify leave this to specify not checking this a verification stage
And now it works.
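If anyone wants to try the same thing, one way (just a suggestion; how you split the pins into files is up to you) is to paste them into a requirements.txt and install it inside the project's environment:

pip install -r requirements.txt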
Thank you! "pip install --upgrade einops" is also useful for AttributeError: module 'keras.backend' has no attribute 'is_tensor' when deploying latent-diffusion.
I fixed it by running:
pip install --upgrade einops
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loaded model config from [./deps/ControlNet/models/cldm_v15.yaml]
Loaded state_dict from [./models/control_sd15_canny.pth]
C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(filename, framework="pt", device=device) as f:
Traceback (most recent call last):
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\utils.py", line 641, in wrapper
response = f(*args, **kwargs)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\renderer\Rerender_A_Video\webUI.py", line 286, in process1
img = numpy2tensor(img)
File "F:\renderer\Rerender_A_Video\src\img_util.py", line 23, in numpy2tensor
return einops.rearrange(x0, 'b h w c -> b c h w').clone()
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\einops.py", line 425, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\einops.py", line 369, in reduce
return recipe.apply(tensor)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\einops.py", line 204, in apply
backend = get_backend(tensor)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\_backends.py", line 49, in get_backend
if backend.is_appropriate_type(tensor):
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\_backends.py", line 513, in is_appropriate_type
return self.K.is_tensor(tensor) and self.K.is_keras_tensor(tensor)
AttributeError: module 'keras.backend' has no attribute 'is_tensor'
..........
I tried to reinstall keras and TensorFlow many times; it didn't help.
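If the error survives the einops upgrade, it may be worth checking which einops the running interpreter actually imports and whether a standalone keras on the same path still lacks backend.is_tensor, since that is the exact attribute the traceback above dies on. A rough diagnostic, assuming nothing about the install:

import importlib
import einops

print("einops", einops.__version__, "from", einops.__file__)
try:
    keras = importlib.import_module("keras")
    print("keras", keras.__version__,
          "backend.is_tensor present:", hasattr(keras.backend, "is_tensor"))
except ImportError:
    # if keras is not importable here, einops has no keras backend to trip over
    print("no standalone keras installed")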