jina-ai / discoart

🪩 Create Disco Diffusion artworks in one line

RuntimeError: "clamp_min_cpu" not implemented for "Half" #187

Open OzzyD opened 1 year ago

OzzyD commented 1 year ago

Morning everyone;

I'm trying to run DiscoArt on a local machine, alas without a GPU. It's straight out of the box: `pip install discoart`, then start Python and run `from discoart import create` followed by `create()`. However, I'm getting the error in the title, `"clamp_min_cpu" not implemented for 'Half'`, as per the console output below.

Can you advise on what's going on, and what I need to do to correct it (aside from getting a CUDA-enabled machine, which funds don't support, or Google Colab)?

Thanks in advance,

Oliver

```
Python 3.8.2 (default, Dec 21 2020, 15:06:04)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from discoart import create
>>> create()
discoart-6f5383004ada11edbb22a860b62d2fe9

Argument                     Value
---------------------------  ------------------------------------------------
batch_name                   None
batch_size                   1
clamp_grad                   True
clamp_max                    0.05
clip_denoised                False
clip_guidance_scale          5000
clip_models                  ['ViT-B-32::openai', 'ViT-B-16::openai', 'RN50::openai']
clip_models_schedules        None
cut_ic_pow                   1.0
cut_icgray_p                 [0.2]*400+[0]*600
cut_innercut                 [4]*400+[12]*600
cut_overview                 [12]*400+[4]*600
cut_schedules_group          None
cutn_batches                 4
diffusion_model              512x512_diffusion_uncond_finetune_008100
diffusion_model_config       None
diffusion_sampling_mode      ddim
display_rate                 1
eta                          0.8
gif_fps                      20
gif_size_ratio               0.5
image_output                 True
init_image                   None
init_scale                   1000
n_batches                    4
name_docarray                discoart-6f5383004ada11edbb22a860b62d2fe9
on_misspelled_token          ignore
perlin_init                  False
perlin_mode                  mixed
rand_mag                     0.05
randomize_class              True
range_scale                  150
sat_scale                    0
save_rate                    20
seed                         3729830824
skip_event                   None
skip_steps                   0
steps                        250
stop_event                   None
text_clip_on_cpu             False
text_prompts                 ['A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.', 'yellow color scheme']
transformation_percent       [0.09]
truncate_overlength_prompt   False
tv_scale                     0
use_horizontal_symmetry      False
use_secondary_model          True
use_vertical_symmetry        False
visualize_cuts               False
width_height                 [1280, 768]

showing all args (bold * args are non-default)
```
```
/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/discoart/helper.py:129: UserWarning: !!!!CUDA is not available. DiscoArt is running on CPU. create() will be unbearably slow on CPU!!!! Please switch to a GPU device. If you are using Google Colab, then free tier would just work.
  warnings.warn(
2022-10-13 10:36:28,663 - discoart - INFO - preparing models...
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  warnings.warn(
/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /Users/studioadmin/Library/Python/3.8/lib/python/site-packages/lpips/weights/v0.1/vgg.pth
2022-10-13 10:36:31,281 - discoart - INFO - W&B dashboard is disabled. To enable the online dashboard for tracking losses, gradients, scheduling tracking, please set WANDB_MODE=online before running/importing DiscoArt. e.g.
```

```python
import os
os.environ['WANDB_MODE'] = 'online'

from discoart import create
create(...)
```

```
2022-10-13 10:36:31,281 - discoart - INFO - creating artworks discoart-6f5383004ada11edbb22a860b62d2fe9 (0/4)...
  0%|          | 0/250 [03:06<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/discoart/create.py", line 217, in create
    da = do_run(
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/discoart/runner.py", line 414, in do_run
    for j, sample in enumerate(samples):
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/guided_diffusion/gaussian_diffusion.py", line 897, in ddim_sample_loop_progressive
    out = sample_fn(
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/guided_diffusion/gaussian_diffusion.py", line 674, in ddim_sample
    out = self.condition_score(cond_fn, out_orig, x, t, model_kwargs=model_kwargs)
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/guided_diffusion/respace.py", line 102, in condition_score
    return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/guided_diffusion/gaussian_diffusion.py", line 399, in condition_score
    eps = eps - (1 - alpha_bar).sqrt() * cond_fn(
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/guided_diffusion/respace.py", line 128, in __call__
    return self.model(x, new_ts, **kwargs)
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/discoart/runner.py", line 207, in cond_fn
    masked_weights = normalize_fn(
  File "/Users/studioadmin/Library/Python/3.8/lib/python/site-packages/torch/nn/functional.py", line 4620, in normalize
    denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input)
RuntimeError: "clamp_min_cpu" not implemented for 'Half'
```
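For reference, the failing call in the traceback is `torch.nn.functional.normalize` on a half-precision CPU tensor; the internal `clamp_min(eps)` had no Half kernel in the CPU builds of PyTorch from that era. A minimal sketch of just that call (on newer PyTorch versions it may succeed; on the versions reported here it raised):

```python
import torch
import torch.nn.functional as F

# Half-precision weights on CPU, mirroring masked_weights in discoart/runner.py
w = torch.randn(3, 4, dtype=torch.half)

try:
    # F.normalize internally clamps the norm with clamp_min(eps)
    out = F.normalize(w, p=2.0, dim=1)
    result = "ok"
except RuntimeError as err:
    # Older CPU builds: "clamp_min_cpu" not implemented for 'Half'
    result = "failed: " + str(err)

print(result)
```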

ayoubachak commented 1 year ago

Same problem here! I'm running it on the Colab free tier.

mp075496706 commented 1 year ago

I also encountered this problem, and I think I've found the cause. At the beginning I also used `pip install discoart` to install the whole project. After running it, this error appeared, but my computer's task manager showed CPU utilization at 100%, which seemed abnormal: the work should be on the GPU. So I went back to the README to check whether I had missed a step, and it mentions that PyTorch with CUDA support is required. That turned out to be the problem. My steps to solve it:

1. `pip uninstall` the existing torch and torchvision.
2. Re-download torch and torchvision builds that match my Python version from https://download.pytorch.org/whl/torch_stable.html.
3. Install the NVIDIA CUDA Toolkit package for my video card (RTX 3080 Ti).

After all the installations completed, I ran it again and it started painting locally perfectly. So my guess is that the torch and torchvision installed by plain pip are CPU-only builds, and you need to install the CUDA versions of torch and torchvision instead.
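The check described above can be sketched as follows (the index URL and CUDA version in the comments are examples only; pick the build matching your driver from pytorch.org):

```shell
# Check whether the installed torch wheel is CUDA-enabled.
# A CPU-only wheel reports "cuda available: False" (and no "+cuXXX" version suffix).
python -c "import torch; print(torch.__version__); print('cuda available:', torch.cuda.is_available())"

# If it is CPU-only, replace it with a CUDA build, e.g. (versions are examples):
#   pip uninstall -y torch torchvision
#   pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116
```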

CaiCaiXian commented 1 year ago

Maybe you left out the start parameter `--gpus all`?

OleguerCanal commented 1 year ago

I think it is a PyTorch issue. Some operations, such as ReLU, are not implemented for half precision on CPU. You can either use full precision or run the model on a GPU.
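If you must stay on CPU, the general workaround pattern is to cast half-precision tensors up to float32 before ops that lack Half CPU kernels. This is a sketch of the pattern only, not a patch to DiscoArt itself:

```python
import torch
import torch.nn.functional as F

# A Half tensor, as produced when fp16 inference is enabled
w = torch.randn(3, 4, dtype=torch.half)

# Cast up to float32 first; this path works on any CPU build of PyTorch
out = F.normalize(w.float(), p=2.0, dim=1)
print(out.dtype)  # torch.float32
```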