TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth
MIT License
7.52k stars · 1.31k forks

Option to load a session with safetensor file instead of ckpt #1958

Open · MaxTran96 opened 1 year ago

MaxTran96 commented 1 year ago

Hi, in the create/load session step, can you add an option to load a safetensors file instead of a ckpt?

TheLastBen commented 1 year ago

Hi, note that session models are always saved as ckpt.

MaxTran96 commented 1 year ago

Let's say I want to load a custom model that is in safetensors format and run DreamBooth on it. How should I do that if create/load session only loads ckpt files?

TheLastBen commented 1 year ago

use the model cell to load a safetensors base model

MaxTran96 commented 1 year ago

By model cell, do you mean the one in the Model Download section? It's not on Hugging Face; it's just a safetensors file I got from https://civitai.com/models/31227/mega-model-22-controlnet-added-version?modelVersionId=37796

TheLastBen commented 1 year ago

You can paste the link to the model in "CKPT_Link" and check the "safetensors" box.
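For background on why the notebook needs a separate checkbox: a `.ckpt` is a pickled PyTorch archive, while a `.safetensors` file starts with an 8-byte little-endian length followed by a JSON header describing each tensor, so the loader has to branch on format. A stdlib-only sketch of reading that header (the file name and tensor key below are invented for illustration):

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header at the start of a .safetensors file (stdlib only)."""
    with open(path, "rb") as f:
        # The first 8 bytes are a little-endian uint64: the JSON header length.
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Build a minimal fake .safetensors file so the sketch is runnable;
# "demo.safetensors" and the tensor name are made up for illustration.
header = {"example.weight": {"dtype": "F32", "shape": [2, 2],
                             "data_offsets": [0, 16]}}
blob = json.dumps(header).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 16)

print(read_safetensors_header("demo.safetensors"))  # prints the tensor metadata
```

Because the header is plain JSON rather than a pickle, it can be inspected without executing arbitrary code, which is the main reason safetensors is preferred over ckpt for models downloaded from sites like Civitai.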

SoultakerSpirit commented 1 year ago

Good thing I saw this post, lol. I'm able to load a safetensors file, but when I try to generate any images I get a ton of errors. Maybe I should do what you suggested and restart my session using the safetensors file I uploaded to my Google Drive.

EDIT: I did the trick you suggested, but when I ran it I got errors:

```
0% 0/20 [00:07<?, ?it/s]
Error completing request
Arguments: ('task(gb7by2hyvs73pbt)', '1boy, 1girl, french kissing', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x7f6d581b6ac0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 152, in forward
    devices.test_for_nans(x_out, "unet")
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
```
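The NansException above comes from the webui's half-precision (fp16) pipeline: some merged models produce activations that fp16 cannot represent, they collapse to inf/NaN, and the `--no-half` flag mentioned in the error keeps the UNet in fp32 instead. The precision gap is easy to see with the stdlib `struct` module, which can round-trip a value through the IEEE 754 half format (the sample values are illustrative only):

```python
import math
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# fp16 has a 10-bit mantissa: integers above 2048 are no longer exactly
# representable, and very small magnitudes flush to zero.
print(to_fp16(2049.0))                # -> 2048.0 (rounded to nearest even)
print(to_fp16(1e-8))                  # -> 0.0 (below the smallest subnormal)
print(math.isinf(to_fp16(math.inf)))  # -> True; an overflow mid-computation
# yields inf, and operations like inf - inf then produce the NaNs seen above.
```

This is why the error message offers two remedies: upcasting the cross-attention layer to float32 avoids the overflow at its usual source, while `--no-half` avoids fp16 entirely at the cost of more VRAM.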

TheLastBen commented 1 year ago

it is fixed in the latest commit, update your notebook

SoultakerSpirit commented 1 year ago

Thanks. I did it just now when I woke up at 10am my time and ran the notebook before coming here, lol. Colab seems like the only place I can make NSFW content for my game, since I need large pics.