lllyasviel / stable-diffusion-webui-forge


sd3 #858

Open · Myoko opened this issue 3 months ago

Myoko commented 3 months ago

The latest main branch (f1.0.0v1.10.0rc-previous-14-g33e381f1) fails to load the SD3 model.

loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "F:\Content\Forge\launch.py", line 51, in <module>
    main()
  File "F:\Content\Forge\launch.py", line 47, in main
    start()
  File "F:\Content\Forge\modules\launch_utils.py", line 549, in start
    main_thread.loop()
  File "F:\Content\Forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "F:\Content\Forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "F:\Content\Forge\modules\sd_models.py", line 572, in get_sd_model
    errors.display(e, "loading stable diffusion model", full_traceback=True)
  File "F:\Content\Forge\modules\sd_models.py", line 569, in get_sd_model
    load_model()
  File "F:\Content\Forge\modules\sd_models.py", line 668, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "F:\Content\Forge\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\Content\Forge\modules_forge\forge_loader.py", line 150, in load_model_for_a1111
    sd_model = instantiate_from_config(a1111_config.model)
  File "F:\Content\Forge\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "F:\Content\Forge\modules\models\sd3\sd3_model.py", line 28, in __init__
    self.model = BaseModel(shift=shift, state_dict=state_dict, prefix="model.diffusion_model.", device="cpu", dtype=devices.dtype)
  File "F:\Content\Forge\modules\models\sd3\sd3_impls.py", line 55, in __init__
    patch_size = state_dict[f"{prefix}x_embedder.proj.weight"].shape[2]
TypeError: 'NoneType' object is not subscriptable

Stable diffusion model failed to load

To create a public link, set share=True in launch().
Startup time: 26.9s (prepare environment: 5.5s, launcher: 1.5s, import torch: 3.4s, setup paths: 2.1s, initialize shared: 0.2s, other imports: 0.8s, load scripts: 4.5s, create ui: 4.6s, gradio launch: 2.3s, app_started_callback: 1.9s).
Loading weights [cc236278d2] from F:\Content\Forge\models\Stable-diffusion\SD3\sd3_medium.safetensors
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "F:\Content\Forge\python\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "F:\Content\Forge\python\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "", line 70, in run
  File "F:\Content\Forge\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "F:\Content\Forge\python\lib\site-packages\gradio\utils.py", line 818, in wrapper
    response = f(*args, **kwargs)
  File "F:\Content\Forge\modules\ui.py", line 1156, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "F:\Content\Forge\modules\shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "F:\Content\Forge\modules\sd_models.py", line 572, in get_sd_model
    errors.display(e, "loading stable diffusion model", full_traceback=True)
  File "F:\Content\Forge\modules\sd_models.py", line 569, in get_sd_model
    load_model()
  File "F:\Content\Forge\modules\sd_models.py", line 668, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "F:\Content\Forge\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\Content\Forge\modules_forge\forge_loader.py", line 150, in load_model_for_a1111
    sd_model = instantiate_from_config(a1111_config.model)
  File "F:\Content\Forge\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "F:\Content\Forge\modules\models\sd3\sd3_model.py", line 28, in __init__
    self.model = BaseModel(shift=shift, state_dict=state_dict, prefix="model.diffusion_model.", device="cpu", dtype=devices.dtype)
  File "F:\Content\Forge\modules\models\sd3\sd3_impls.py", line 55, in __init__
    patch_size = state_dict[f"{prefix}x_embedder.proj.weight"].shape[2]
TypeError: 'NoneType' object is not subscriptable

Stable diffusion model failed to load
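Both tracebacks above die at the same spot: modules/models/sd3/sd3_impls.py (line 55) indexes the state dict for model.diffusion_model.x_embedder.proj.weight, but the loader hands it state_dict=None, hence the 'NoneType' object is not subscriptable. Below is a minimal sketch of that step with a guard added; the helper name and error message are made up for illustration, and only the indexed key and the shape lookup come from the traceback.

```python
# Hypothetical guard around the failing lookup in sd3_impls.py (not the real fix).
def infer_patch_size(state_dict, prefix="model.diffusion_model."):
    key = f"{prefix}x_embedder.proj.weight"
    if state_dict is None or key not in state_dict:
        # The loader passed no usable MMDiT weights, i.e. the checkpoint was
        # not routed through an SD3 loading path rather than being corrupt.
        raise ValueError(f"no SD3 weights found: missing '{key}'")
    # The patch-embed conv weight has shape [out_ch, in_ch, patch, patch],
    # so dimension 2 is the patch size (2 for sd3_medium).
    return state_dict[key].shape[2]
```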

lllyasviel commented 3 months ago

Newer DiT models are under construction now.

Panchovix commented 3 months ago

Nice, thanks a lot for that, @lllyasviel!

One question: will you add SD3 model loading through the Forge backend or the A1111 backend? On my fork https://github.com/Panchovix/stable-diffusion-webui-reForge/commits/dev_upstream (the dev_upstream branch) I already have the code for model management, supported models, model detection, etc. (basically everything in ldm_patched except, I think, sd.py) for the new models (SD Cascade, SD3, AuraFlow, etc.), but I haven't gotten them to work, because I think forge_loader and unet_patcher have to be patched somehow to support them. So the models are detected correctly, but they don't load.
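For reference, here is a rough sketch of the kind of key-based detection being discussed. This is not the actual ldm_patched or Forge code; the SD1/SDXL marker keys are assumptions worth double-checking, and only the SD3 key is taken from the tracebacks in this thread.

```python
from safetensors import safe_open

def guess_model_family(checkpoint_path: str) -> str:
    # Read only the safetensors header to inspect key names cheaply.
    with safe_open(checkpoint_path, framework="pt", device="cpu") as f:
        keys = set(f.keys())
    # The MMDiT patch embedder exists only in SD3-style checkpoints; it is the
    # same key the failing SD3 loader tries to index.
    if "model.diffusion_model.x_embedder.proj.weight" in keys:
        return "sd3"
    if "model.diffusion_model.input_blocks.0.0.weight" in keys:
        # UNet-style checkpoint; SDXL additionally ships a second text encoder.
        if any(k.startswith("conditioner.embedders.1.") for k in keys):
            return "sdxl"
        return "sd1"
    return "unknown"
```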

kalle07 commented 3 months ago

At the moment (version f1.0.2v1.10.1-previous-47-gf052fabd):

Loading weights [cc236278d2] from E:\WebUI_Forge\webui\models\Stable-diffusion\stableDiffusion3SD3_sd3Medium.safetensors
Traceback (most recent call last):
  File "E:\WebUI_Forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\WebUI_Forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\WebUI_Forge\webui\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "E:\WebUI_Forge\webui\modules\processing.py", line 805, in process_images
    sd_models.reload_model_weights()
  File "E:\WebUI_Forge\webui\modules\sd_models.py", line 714, in reload_model_weights
    return load_model(info)
  File "E:\WebUI_Forge\webui\modules\sd_models.py", line 668, in load_model
    sd_model = forge_loader.load_model_for_a1111(timer=timer, checkpoint_info=checkpoint_info, state_dict=state_dict)
  File "e:\WebUI_Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\WebUI_Forge\webui\modules_forge\forge_loader.py", line 152, in load_model_for_a1111
    sd_model = instantiate_from_config(a1111_config.model)
  File "E:\WebUI_Forge\webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "E:\WebUI_Forge\webui\modules\models\sd3\sd3_model.py", line 28, in __init__
    self.model = BaseModel(shift=shift, state_dict=state_dict, prefix="model.diffusion_model.", device="cpu", dtype=devices.dtype)
  File "E:\WebUI_Forge\webui\modules\models\sd3\sd3_impls.py", line 55, in __init__
    patch_size = state_dict[f"{prefix}x_embedder.proj.weight"].shape[2]
TypeError: 'NoneType' object is not subscriptable
'NoneType' object is not subscriptable
Error completing request
Arguments: ('task(089aqosvewzqxco)', <gradio.route_utils.Request object at 0x0000024806C96E00>, 'futanari, penis', 'sketch, cartoon, anime, muscular', [], 1, 1, 4, 1204, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', None, 0, 40, 'Euler a', 'Exponential', False, '', 0.8, 3847952285, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0,
'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n "face_detector": "RetinaFace",\n "rules": {\n "then": {\n "face_processor": "img2img",\n "mask_generator": {\n "name": "BiSeNet",\n "params": {\n "fallback_ratio": 0.1\n }\n }\n }\n }\n}', 'None', 40, False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 
1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, 'tab_single', False, 'None', 20, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 0.5, 2, False, 3, False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', '', 0, '', '', 0, '', '', True, False, False, False, False, False, False, 0, False, 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n "face_detector": "RetinaFace",\n "rules": {\n "then": {\n "face_processor": "img2img",\n "mask_generator": {\n "name": "BiSeNet",\n "params": {\n "fallback_ratio": 0.1\n }\n }\n }\n }\n}', 'None', 40) {} Traceback (most recent call last): File "E:\WebUI_Forge\webui\modules\call_queue.py", line 74, in f res = list(func(args, kwargs)) TypeError: 'NoneType' object is not iterable


Myoko commented 3 months ago

The new Flux model is better than SD3. Waiting for @lllyasviel to add support.

siriume commented 3 months ago

commit: 252d437 ERROR:

Loading weights [3bb7f21bc5] from /opt/dev/sd_forge/models/Stable-diffusion/sd3_medium_incl_clips.safetensors
StateDict Keys: {'unet': 491, 'vae': 244, 'ignore': 713}
Expected state_dict to be dict-like, got <class 'NoneType'>.
Traceback (most recent call last):
  File "/opt/dev/sd_forge/modules_forge/main_thread.py", line 37, in loop
    task.work()
  File "/opt/dev/sd_forge/modules_forge/main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "/opt/dev/sd_forge/modules/txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "/opt/dev/sd_forge/modules/processing.py", line 776, in process_images
    sd_models.reload_model_weights()
  File "/opt/dev/sd_forge/modules/sd_models.py", line 604, in reload_model_weights
    return load_model(info)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/dev/sd_forge/modules/sd_models.py", line 562, in load_model
    sd_model = forge_loader(state_dict)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/dev/sd_forge/backend/loader.py", line 107, in forge_loader
    component = load_huggingface_component(estimated_config, component_name, lib_name, cls_name, local_path, component_sd)
  File "/opt/dev/sd_forge/backend/loader.py", line 52, in load_huggingface_component
    load_state_dict(model, state_dict, ignore_errors=[
  File "/opt/dev/sd_forge/backend/state_dict.py", line 5, in load_state_dict
    missing, unexpected = model.load_state_dict(sd, strict=False)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2140, in load_state_dict
    raise TypeError(f"Expected state_dict to be dict-like, got {type(state_dict)}.")
TypeError: Expected state_dict to be dict-like, got <class 'NoneType'>.
*** Error completing request
*** Arguments: ('task(0nd8xejsp3cc8ol)', <gradio.route_utils.Request object at 0x7fec0202c850>, 'a  cat', '', [], 1, 1, 4, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', None, 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 3, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', '', 0, '', '', 0, '', '', True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/opt/dev/sd_forge/modules/call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
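Here the new backend gets further: the checkpoint is recognized and split per component (StateDict Keys: {'unet': 491, 'vae': 244, 'ignore': 713}), but one component's slice of the checkpoint arrives as None, so torch's Module.load_state_dict rejects it. Below is a defensive variant of the small wrapper in backend/state_dict.py, purely as a sketch; the skip-and-log behaviour and the extra parameter are assumptions, not the actual fix.

```python
import torch

def load_state_dict(model: torch.nn.Module, sd, log_name="component"):
    # Guard against a component whose slice of the checkpoint is missing; an
    # empty or None sd is what triggers
    # "Expected state_dict to be dict-like, got <class 'NoneType'>".
    if not isinstance(sd, dict) or not sd:
        print(f"[loader] no weights found for {log_name}; skipping")
        return [], []
    missing, unexpected = model.load_state_dict(sd, strict=False)
    if missing or unexpected:
        print(f"[loader] {log_name}: {len(missing)} missing / {len(unexpected)} unexpected keys")
    return missing, unexpected
```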

---
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/fastapi/applications.py", line 1106, in __call__
    await super().__call__(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/gradio/route_utils.py", line 730, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/gradio/route_utils.py", line 746, in simple_response
    await self.app(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/fastapi/routing.py", line 274, in app
    raw_response = await run_endpoint_function(
  File "/opt/anaconda3/envs/forge/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/opt/dev/sd_forge/extensions/sd-webui-prompt-all-in-one/scripts/on_app_started.py", line 108, in _token_counter
    return get_token_counter(data['text'], data['steps'])
  File "/opt/dev/sd_forge/extensions/sd-webui-prompt-all-in-one/scripts/physton_prompt/get_token_counter.py", line 30, in get_token_counter
    cond_stage_model = sd_models.model_data.sd_model.cond_stage_model
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
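The trailing ASGI exception is downstream fallout: the sd-webui-prompt-all-in-one extension reads sd_models.model_data.sd_model.cond_stage_model while no model is loaded, so sd_model is still None. A hedged sketch of the kind of guard an extension can add (the helper name and its None return are assumptions):

```python
from modules import sd_models

def get_cond_stage_model_or_none():
    # After a failed checkpoint load, model_data.sd_model stays None, so
    # reaching for .cond_stage_model directly raises the AttributeError above.
    model = sd_models.model_data.sd_model
    if model is None:
        return None
    return model.cond_stage_model
```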