lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

[BUG]: should GGUF work with the newest webui version? #2031

Closed kalle07 closed 1 month ago

kalle07 commented 1 month ago


```
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
[Unload] Trying to free 7494.80 MB for cuda:0 with 0 models keep loaded ... Current free memory is 6980.28 MB ... Unload model KModel Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 15077.85 MB, Model Require: 3935.23 MB, Previously Loaded: 0.00 MB, Inference Require: 2379.00 MB, Remaining: 8763.62 MB, All loaded to GPU.
Moving model(s) has taken 3.50 seconds
Distilled CFG Scale: 3.5
[Unload] Trying to free 12827.46 MB for cuda:0 with 0 models keep loaded ... Current free memory is 11077.62 MB ... Unload model JointTextEncoder Done.
[Memory Management] Target: KModel, Free GPU: 15073.16 MB, Model Require: 8037.28 MB, Previously Loaded: 0.00 MB, Inference Require: 2379.00 MB, Remaining: 4656.88 MB, All loaded to GPU.
Moving model(s) has taken 3.50 seconds
  0%|          | 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\WebUI_Forge\webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\WebUI_Forge\webui\modules\txt2img.py", line 123, in txt2img_function
    processed = processing.process_images(p)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 817, in process_images
    res = process_images_inner(p)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 960, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 1337, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\WebUI_Forge\webui\modules\sd_samplers_kdiffusion.py", line 238, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\WebUI_Forge\webui\modules\sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "E:\WebUI_Forge\webui\modules\sd_samplers_kdiffusion.py", line 238, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\WebUI_Forge\webui\k_diffusion\sampling.py", line 129, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\modules\sd_samplers_cfg_denoiser.py", line 199, in forward
    denoised, cond_pred, uncond_pred = sampling_function(self, denoiser_params=denoiser_params, cond_scale=cond_scale, cond_composition=cond_composition)
  File "E:\WebUI_Forge\webui\backend\sampling\sampling_function.py", line 362, in sampling_function
    denoised, cond_pred, uncond_pred = sampling_function_inner(model, x, timestep, uncond, cond, cond_scale, model_options, seed, return_full=True)
  File "E:\WebUI_Forge\webui\backend\sampling\sampling_function.py", line 303, in sampling_function_inner
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "E:\WebUI_Forge\webui\backend\sampling\sampling_function.py", line 273, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep, c).chunk(batch_chunks)
  File "E:\WebUI_Forge\webui\backend\modules\k_model.py", line 45, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\backend\nn\flux.py", line 418, in forward
    out = self.inner_forward(img, img_ids, context, txt_ids, timestep, y, guidance)
  File "E:\WebUI_Forge\webui\backend\nn\flux.py", line 375, in inner_forward
    img = self.img_in(img)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\WebUI_Forge\webui\backend\operations.py", line 432, in forward
    return torch.nn.functional.linear(x, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2816x64 and 256x768)
```

l0stl0rd commented 1 month ago

I am curious why you closed it; I got the same issue. Plus, GGUF models used to work, and now nearly none do.

kalle07 commented 1 month ago

It depends on the GGUF ... some run, some don't. I hate that FLUX has 10 versions and different types of CLIP and T5xxl.

l0stl0rd commented 1 month ago

I agree, that is one thing that bugs me about Flux too.

johnnykorm commented 1 week ago

Same problem here with some GGUF models.