sALTaccount / VAE-BlessUp

A tool to easily modify a Stable Diffusion VAE

need your help #3

Closed raykang-nsuslab closed 1 year ago

raykang-nsuslab commented 1 year ago

First, thank you for your work.


Hi, I got this error; could you please help me out?

I was trying to adjust the 'kl-f8-anime2.ckpt' VAE file's contrast to 1.4 and brightness to 1.2 (I'm not sure I wrote these arguments properly).

I tried to launch with these arguments:

```
G:\VAE-BlessUp-master>G:\VAE-BlessUp-master\bless_vae.py --model_path G:\VAE-BlessUp-master\in\kl-f8-anime2.ckpt --model_type compvis --output_path G:\VAE-BlessUp-master\out\ --output_type diffusers --contrast 1.4 --contrast_operation mul --brightness 1.2 --brightness_operation mul --patch_encoder
Loading model...
Traceback (most recent call last):
  File "G:\VAE-BlessUp-master\bless_vae.py", line 28, in <module>
    vae = torch.load(args.model_path)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1083, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 215, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

G:\VAE-BlessUp-master>G:\VAE-BlessUp-master\bless_vae.py --model_path G:\VAE-BlessUp-master\in\kl-f8-anime2.ckpt --model_type compvis --output_path G:\VAE-BlessUp-master\out\ --output_type compvis --contrast 1.4 --contrast_operation mul --brightness 1.2 --brightness_operation mul --patch_encoder
Loading model...
Traceback (most recent call last):
  File "G:\VAE-BlessUp-master\bless_vae.py", line 28, in <module>
    vae = torch.load(args.model_path)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 1083, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 215, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\zi_zi\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

Win11 RTX 4090
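
For reference, a quick diagnostic sketch (not part of the original report) to confirm whether the installed torch build can actually see the RTX 4090; the traceback above implies it cannot:

```python
import torch

# Diagnostic only: check whether this torch build was compiled with CUDA
# support and can currently see a GPU.
print(torch.__version__)          # CUDA wheels from the PyTorch index usually carry a +cuXXX suffix
print(torch.version.cuda)         # CUDA version torch was built against, or None for CPU-only builds
print(torch.cuda.is_available())  # False is exactly the condition the RuntimeError above complains about
```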

raykang-nsuslab commented 1 year ago

Package versions in the environment:

```
absl-py 1.4.0  accelerate 0.15.0  addict 2.4.0  aenum 3.1.11  aiofiles 22.1.0  aiohttp 3.8.3
aiosignal 1.3.1  albumentations 1.3.0  altair 4.2.2  antlr4-python3-runtime 4.9.3  anyio 3.6.2  astunparse 1.6.3
async-timeout 4.0.2  attrs 22.2.0  basicsr 1.4.2  bbid 1.0  beautifulsoup4 4.11.1  bing-image-downloader 1.1.2
bitsandbytes 0.35.0  blendmodes 2022  boltons 21.0.0  cachetools 5.2.1  certifi 2022.12.7  chardet 4.0.0
charset-normalizer 2.1.1  clean-fid 0.1.29  click 8.1.3  clip 1.0  colorama 0.4.6  contourpy 1.0.7
cycler 0.11.0  dadaptation 1.5  deprecation 2.1.0  diffusers 0.14.0  distlib 0.3.6  easygui 0.98.3
einops 0.6.0  entrypoints 0.4  facexlib 0.2.5  fairscale 0.4.13  fastapi 0.89.1  ffmpy 0.3.0
filelock 3.9.0  filterpy 1.4.5  flatbuffers 23.1.4  font-roboto 0.0.1  fonts 0.0.3  fonttools 4.38.0
frozenlist 1.3.3  fsspec 2023.1.0  ftfy 6.1.1  future 0.18.3  gast 0.4.0  gdown 4.6.0
gfpgan 1.3.8  gitdb 4.0.10  GitPython 3.1.27  google-auth 2.16.0  google-auth-oauthlib 0.4.6  google-pasta 0.2.0
gradio 3.19.1  grpcio 1.51.1  h11 0.12.0  h5py 3.8.0  httpcore 0.15.0  httpx 0.23.3
huggingface-hub 0.12.0  idna 2.10  imageio 2.24.0  importlib-metadata 6.1.0  inflection 0.5.1  Jinja2 3.1.2
joblib 1.2.0  jsonmerge 1.8.0  jsonschema 4.17.3  keras 2.10.0  Keras-Preprocessing 1.1.2  kiwisolver 1.4.4
kornia 0.6.7  lark 1.1.2  libclang 15.0.6.1  library 1.0.2  lightning-utilities 0.8.0  linkify-it-py 1.0.3
lion-pytorch 0.0.6  llvmlite 0.39.1  lmdb 1.4.0  lpips 0.1.4  lycoris-lora 0.1.2  Markdown 3.4.1
markdown-it-py 2.1.0  MarkupSafe 2.1.2  matplotlib 3.6.3  mdit-py-plugins 0.3.3  mdurl 0.1.2  multidict 6.0.4
networkx 3.0  numba 0.56.4  numpy 1.23.3  oauthlib 3.2.2  omegaconf 2.3.0  open-clip-torch 2.7.0
opencv-python 4.7.0.68  opencv-python-headless 4.7.0.72  opt-einsum 3.3.0  orjson 3.8.5  packaging 23.0  pandas 1.5.3
piexif 1.1.3  Pillow 9.4.0  pip 22.2.2  platformdirs 3.0.0  protobuf 3.19.6  psutil 5.9.4
pyasn1 0.4.8  pyasn1-modules 0.2.8  pycryptodome 3.16.0  pydantic 1.10.4  pyDeprecate 0.3.2  pydub 0.25.1
pyparsing 3.0.9  PyQt5 5.15.7  PyQt5-Qt5 5.15.2  PyQt5-sip 12.11.1  pyrsistent 0.19.3  PySocks 1.7.1
python-dateutil 2.8.2  python-multipart 0.0.4  pytorch-lightning 1.9.0  pytz 2022.7.1  PyWavelets 1.4.1  PyYAML 6.0
qudida 0.0.4  realesrgan 0.3.0  regex 2022.10.31  requests 2.28.2  requests-oauthlib 1.3.1  resize-right 0.0.2
rfc3986 1.5.0  rsa 4.9  safetensors 0.2.6  scikit-image 0.19.2  scikit-learn 1.2.2  scipy 1.10.0
sentencepiece 0.1.97  setuptools 63.2.0  six 1.16.0  smmap 5.0.0  sniffio 1.3.0  soupsieve 2.3.2.post1
starlette 0.22.0  tb-nightly 2.12.0a20230120  tensorboard 2.10.1  tensorboard-data-server 0.6.1  tensorboard-plugin-wit 1.8.1  tensorflow 2.10.1
tensorflow-estimator 2.10.0  tensorflow-io-gcs-filesystem 0.29.0  termcolor 2.2.0  threadpoolctl 3.1.0  tifffile 2022.10.10  timm 0.6.12
tk 0.1.0  tokenizers 0.12.1  toml 0.10.2  toolz 0.12.0  torch 1.13.1  torchdiffeq 0.2.3
torchmetrics 0.11.0  torchsde 0.2.5  torchvision 0.13.1+cu113  tqdm 4.64.1  trampoline 0.1.2  transformers 4.27.3
typing_extensions 4.4.0  uc-micro-py 1.0.1  urllib3 1.26.14  uvicorn 0.20.0  virtualenv 20.19.0  voluptuous 0.13.1
wcwidth 0.2.6  websockets 10.4  Werkzeug 2.2.2  wheel 0.38.4  wrapt 1.14.1  yapf 0.32.0
yarl 1.8.2  zipp 3.11.0
```

I made a virtualenv and tried again.

The venv didn't work.

raykang-nsuslab commented 1 year ago

This one was fine. Maybe I should try another VAE?

```
(venv) G:\VAE-BlessUp-master>G:\VAE-BlessUp-master\bless_vae.py --model_path G:\VAE-BlessUp-master\in\vae-ft-mse-840000-ema-pruned.ckpt --model_type compvis --output_path G:\VAE-BlessUp-master\out\afaf.ckpt --output_type compvis --contrast 1.4 --contrast_operation mul --brightness 1.2 --brightness_operation mul --patch_encoder
Loading model...
Applying modifications...
Saving...
Reshaping encoder.mid.attn_1.q.weight for SD format
Reshaping encoder.mid.attn_1.k.weight for SD format
Reshaping encoder.mid.attn_1.v.weight for SD format
Reshaping encoder.mid.attn_1.proj_out.weight for SD format
Reshaping decoder.mid.attn_1.q.weight for SD format
Reshaping decoder.mid.attn_1.k.weight for SD format
Reshaping decoder.mid.attn_1.v.weight for SD format
Reshaping decoder.mid.attn_1.proj_out.weight for SD format
Done!

(venv) G:\VAE-BlessUp-master>
```

sALTaccount commented 1 year ago

I believe that diffusers defaults to using the GPU. Your torch install doesn't have GPU support, so when you try to save in diffusers format you get an error about trying to use the GPU. If you install the correct version of torch for your GPU (if you have one), it will work fine. There is no advantage to sending the model to the GPU, so I will consider changing the code to be CPU-only.
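
For reference, a minimal sketch (not from the repo) of the CPU-only load that the traceback itself suggests. The `model_path` value is taken from the command in the original report, and it is an assumption that the line-28 load in `bless_vae.py` is the only place that needs the change:

```python
import torch

# Assumption: this mirrors the failing load at bless_vae.py line 28, where the
# checkpoint is currently read with a plain torch.load(args.model_path).
model_path = r"G:\VAE-BlessUp-master\in\kl-f8-anime2.ckpt"

# map_location forces every tensor in the checkpoint onto the CPU, so a VAE
# saved from a CUDA device can be loaded by a CPU-only torch build.
vae = torch.load(model_path, map_location=torch.device("cpu"))
```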