IceClear / StableSR

[IJCV2024] Exploiting Diffusion Prior for Real-World Image Super-Resolution
https://iceclear.github.io/projects/stablesr/

Google colab not working even with PRO and high-RAM #43

Open tralala87 opened 1 year ago

tralala87 commented 1 year ago

Traceback (most recent call last):
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 319, in <module>
    main()
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 238, in main
    img_list_ori = os.listdir(opt.init_img)
NotADirectoryError: [Errno 20] Not a directory: 'inputs/user_upload/frame162.jpg'

color correction>>>>>>>>>>> Use adain color correction

Loading model from ./vqgan_cfw_00011.ckpt
Global Step: 18000
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1 with CUDA None (you have 2.0.1+cu117)
    Python 3.10.11 (you have 3.10.10)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
/usr/local/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
  warnings.warn(msg)
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
missing>>>>>>>>>>>>>>>>>>>

IceClear commented 1 year ago

Your env is not set up correctly. You should follow the demo step by step and make sure each cell finishes correctly.

tralala87 commented 1 year ago

Hmm... I just ran the Colab as is; I didn't change anything.

IceClear commented 1 year ago

The log info shows that your PyTorch version is not correct. I guess you didn't run some of the cells successfully.
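For reference, a quick way to confirm which versions the Colab runtime actually has (a minimal sketch run in a fresh cell; the 1.13.1 expectation comes from the xFormers warning in the log above, not from this repo's docs):

```python
# Compare the installed PyTorch / torchvision against what xFormers was built for.
import torch, torchvision
print("torch:", torch.__version__, "CUDA:", torch.version.cuda)   # log above expects 1.13.1
print("torchvision:", torchvision.__version__)
```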

tralala87 commented 1 year ago

Ok. I have installed the correct versions of dependencies. Now I get:

File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py", line 19, in from ldm.util import instantiate_from_config ModuleNotFoundError: No module named 'ldm'

when running this cell:

VQGANTILE_STRIDE = int(VQGANTILE_SIZE * 0.9)
if Enable_Tile:
  !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain'
elif Aggregation_Sampling:
  !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --colorfix_type 'adain'
else:
  !python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --seed {SEED} --colorfix_type 'adain'

IceClear commented 1 year ago

It is still an env problem. You need to make sure all of this part works properly: image

tralala87 commented 1 year ago

Um... I click the button to run this cell. Are there any other ways to make this part work properly? I have the ldm folder in my Colab runtime, but for some reason the !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain' command can't find it or access it... but it's Colab, so it can't be an administrator-access issue.

tralala87 commented 1 year ago

btw, I use Windows, if that matters..

IceClear commented 1 year ago

Um... I click the button to run this cell. Are there any other ways to make this part work properly? I have the ldm folder in my Colab runtime, but for some reason the !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain' command can't find it or access it... but it's Colab, so it can't be an administrator-access issue.

That is because you did not successfully pip install -e . all the things. You should pay attention to the command-line output to make sure there is no error info. You also need to make sure each cell has completely finished before you run the next one.
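For anyone following along, a minimal sketch of how to verify that step in a Colab cell (the /content/StableSR path and this cell layout are assumptions about a typical setup, not the exact contents of the official notebook):

```python
# Make sure we are inside the cloned repo so the local 'ldm' package is importable,
# then rerun the editable install and confirm it finished without errors.
%cd /content/StableSR
!pip install -e .
!python -c "import ldm; print('ldm import OK')"
```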

IceClear commented 1 year ago

btw, I use Windows, if that matters..

Colab runs on the web, so that does not matter.

tralala87 commented 1 year ago

Um... I click the button to run this cell. Are there any other ways to make this part work properly? I have the ldm folder in my Colab runtime, but for some reason the !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain' command can't find it or access it... but it's Colab, so it can't be an administrator-access issue.

That is because you did not successfully pip install -e . all the things. You should pay attention to the command-line output to make sure there is no error info. You also need to make sure each cell has completely finished before you run the next one.

OK... but I just clicked to run the Colab code as provided. I did not change anything. Please double-check that the Colab is written correctly.

IceClear commented 1 year ago

I suppose it should work, since I used to run it successfully and I did not change the env settings. I don't have Colab Pro now, so I am sorry I couldn't do further checking. Maybe someone else could help if possible :)

tralala87 commented 1 year ago

I have found that I needed to restart the kernel. However, now I am getting a different error, despite having the latest pytorch_lightning installed, as instructed by the Colab:

Global seed set to 42

color correction>>>>>>>>>>> Use adain color correction

Loading model from ./stablesr_000117.ckpt
Global Step: 16500
Traceback (most recent call last):
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py", line 422, in <module>
    main()
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py", line 259, in main
    model = load_model_from_config(config, f"{opt.ckpt}")
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py", line 93, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "/content/StableSR/ldm/util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/StableSR/ldm/util.py", line 101, in get_obj_from_str
    return getattr(importlib.import_module(module, package=None), cls)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/content/StableSR/ldm/models/diffusion/ddpm.py", line 19, in <module>
    from pytorch_lightning.utilities.distributed import rank_zero_only
ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'

IceClear commented 1 year ago

Hi~ The pytorch_lightning version should be 1.4.2. If you follow the demo, there should be no version errors.
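For anyone hitting the same ModuleNotFoundError, a hedged sketch of the two usual ways out (the 1.4.2 pin is the one IceClear states above; the alternative import location is an assumption about how newer pytorch_lightning releases reorganized their utilities, not something taken from this repo's Colab):

```python
# Option 1: pin the version the repo expects, then restart the Colab runtime.
!pip install pytorch-lightning==1.4.2

# Option 2 (if a newer pytorch_lightning must stay installed): edit the failing
# import at the top of ldm/models/diffusion/ddpm.py to the newer location, e.g.
#   from pytorch_lightning.utilities.distributed import rank_zero_only   # old location
#   from pytorch_lightning.utilities import rank_zero_only               # newer releases
```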

tralala87 commented 1 year ago

I just ran the Google Colab without any changes, and I get:

NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(1, 25600, 1, 512) (torch.float16)
    key : shape=(1, 25600, 1, 512) (torch.float16)
    value : shape=(1, 25600, 1, 512) (torch.float16)
    attn_bias : <class 'NoneType'>
    p : 0.0
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
flshattF is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
    Operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
    requires A100 GPU
smallkF is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    has custom scale
    Operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 512

IceClear commented 1 year ago

It seems like a problem with the xformers version. Sorry, I do not know what is wrong on your side. The version info is all included in the env file, and there should be no problem.
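A quick way to see what the installed xFormers build actually supports is the diagnostic the error message itself points to (run in a Colab cell; nothing here is specific to StableSR):

```python
# Prints the xFormers build info, including whether the CUDA kernels
# (memory-efficient attention, etc.) were compiled in and which torch it was built against.
!python -m xformers.info
```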

tralala87 commented 1 year ago

I tried again today, opened the Colab with my Colab Pro account, and made sure it's set to high RAM, but I am still getting the same error. I didn't change anything.

karellat commented 1 year ago

Replacing the xformers installation/build with the pre-built package xformers==0.0.16rc425 works for me: !pip install xformers==0.0.16rc425

7ywx commented 7 months ago

Replacing the xformers installation/build with the pre-built package xformers==0.0.16rc425 works for me: !pip install xformers==0.0.16rc425

Are you using Windows?

7ywx commented 7 months ago

btw, I use Windows, if that matters..

Colab runs on the web, so that does not matter.

Does it work on Windows? Triton and xformers cannot be installed. I am currently working on the Windows platform and encountered challenges while attempting to install Triton and xformers. Could you please provide guidance or suggestions on how to install these components on non-Linux systems? Any assistance or insights you can offer would be greatly appreciated.