Alisa121212 opened 1 month ago
I am getting the error `bf16 is only supported on A100+ GPUs`, together with `requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)`. What is surprising is that it was working well a month ago, on the same GTX 1080 11 GB video card. What could have happened?

When trying to start processing, I get this output:
Updating 668e87f9..c3366a76
error: Your local changes to the following files would be overwritten by merge:
style.css
Please commit your changes or stash them before you merge.
Aborting
venv "D:\StablediffFORGE\webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-501-g668e87f9
Commit hash: 668e87f920be30001bb87214d9001bf59f2da275
Launching Web UI with arguments: --xformers --opt-sdp-attention --medvram-sdxl --theme dark --medvram
Arg --medvram is removed in Forge. Now memory management is fully automatic and you do not need any command flags. Please just remove this flag. In extreme cases, if you want to force previous lowvram/medvram behaviors, please use --always-offload-from-vram
Arg --medvram-sdxl is removed in Forge. Now memory management is fully automatic and you do not need any command flags. Please just remove this flag. In extreme cases, if you want to force previous lowvram/medvram behaviors, please use --always-offload-from-vram
Total VRAM 8192 MB, total RAM 32374 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2070 SUPER : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: D:\StablediffFORGE\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.9.0, num models: 10
*** Error loading script: animatediff.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff.py", line 10, in
from scripts.animatediff_infv2v import AnimateDiffInfV2V
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_infv2v.py", line 5, in
from ldm_patched.modules.model_management import get_torch_device, soft_empty_cache
ModuleNotFoundError: No module named 'ldm_patched'
*** Error loading script: animatediff_infotext.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_infotext.py", line 6, in
from scripts.animatediff_ui import AnimateDiffProcess
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_ui.py", line 12, in
from scripts.animatediff_mm import mm_animatediff as motion_module
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
*** Error loading script: animatediff_infv2v.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_infv2v.py", line 5, in
from ldm_patched.modules.model_management import get_torch_device, soft_empty_cache
ModuleNotFoundError: No module named 'ldm_patched'
*** Error loading script: animatediff_latent.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_latent.py", line 10, in
from scripts.animatediff_ui import AnimateDiffProcess
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_ui.py", line 12, in
from scripts.animatediff_mm import mm_animatediff as motion_module
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
*** Error loading script: animatediff_mm.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
*** Error loading script: animatediff_output.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_output.py", line 15, in
from scripts.animatediff_ui import AnimateDiffProcess
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_ui.py", line 12, in
from scripts.animatediff_mm import mm_animatediff as motion_module
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
*** Error loading script: animatediff_prompt.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_prompt.py", line 7, in
from scripts.animatediff_infotext import write_params_txt
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_infotext.py", line 6, in
from scripts.animatediff_ui import AnimateDiffProcess
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_ui.py", line 12, in
from scripts.animatediff_mm import mm_animatediff as motion_module
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
*** Error loading script: animatediff_settings.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_settings.py", line 4, in
from scripts.animatediff_ui import supported_save_formats
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_ui.py", line 12, in
from scripts.animatediff_mm import mm_animatediff as motion_module
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
*** Error loading script: animatediff_ui.py
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StablediffFORGE\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_ui.py", line 12, in
from scripts.animatediff_mm import mm_animatediff as motion_module
File "D:\StablediffFORGE\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 5, in
from modules_forge.unet_patcher import UnetPatcher
ModuleNotFoundError: No module named 'modules_forge.unet_patcher'
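A side note on the repeated `ModuleNotFoundError`s above: the sd-forge-animatediff extension imports internal Forge modules (`ldm_patched`, `modules_forge.unet_patcher`) that apparently no longer exist in this Forge build, so every one of its scripts fails to load. This is separate from the bf16 error. As a quick sketch (module names taken straight from the tracebacks), whether those modules are importable from the webui's venv can be checked with the standard library:

```python
import importlib.util

# Check whether the modules the extension tries to import actually
# exist in the current environment. find_spec() returns None for a
# missing top-level module instead of raising.
for name in ("ldm_patched", "modules_forge"):
    spec = importlib.util.find_spec(name)
    print(f"{name}: {'found' if spec is not None else 'MISSING'}")
```

Run from the activated venv, both names printing `MISSING` would confirm the extension is incompatible with the current Forge code rather than misconfigured.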
2024-09-08 09:05:39,924 - ControlNet - INFO - ControlNet UI callback registered.
D:\StablediffFORGE\webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py:414: GradioDeprecationWarning: unexpected argument for Dropdown: default
version = gr.Dropdown(
No config file found for 3d-render-v2. You can generate it in the LoRA tab.
No config file found for Add More Details - Detail Enhancer. You can generate it in the LoRA tab.
No config file found for add_detail. You can generate it in the LoRA tab.
No config file found for aidma-Image Upgrader-v0.1. You can generate it in the LoRA tab.
No config file found for aidmaImageUpgrader-FLUX-V0.1. You can generate it in the LoRA tab.
No config file found for CharacterDesign-FluxV2. You can generate it in the LoRA tab.
No config file found for COOLKIDS_MERGE_V2.5. You can generate it in the LoRA tab.
No config file found for COOLKIDS_XL_0.3_RC. You can generate it in the LoRA tab.
No config file found for Dressed animals. You can generate it in the LoRA tab.
No config file found for F2D-000003. You can generate it in the LoRA tab.
No config file found for flat childrenXX. You can generate it in the LoRA tab.
No config file found for Flat style-000014. You can generate it in the LoRA tab.
No config file found for flaticon_v1_2. You can generate it in the LoRA tab.
No config file found for flat_illustration. You can generate it in the LoRA tab.
No config file found for game_icon_v1.0. You can generate it in the LoRA tab.
No config file found for hand v1 flux. You can generate it in the LoRA tab.
No config file found for hand v1. You can generate it in the LoRA tab.
No config file found for Harrlogos_v2.0. You can generate it in the LoRA tab.
No config file found for Japanese_style_Minimalist_Line_Illustrations. You can generate it in the LoRA tab.
No config file found for J_cartoon. You can generate it in the LoRA tab.
No config file found for LogoRedmondV2-Logo-LogoRedmAF. You can generate it in the LoRA tab.
No config file found for LowpolySDXL. You can generate it in the LoRA tab.
No config file found for Minimalist_flat_icons_XL-000006. You can generate it in the LoRA tab.
No config file found for more_details. You can generate it in the LoRA tab.
No config file found for pixel-art-xl-v1.1. You can generate it in the LoRA tab.
No config file found for pixel_f2. You can generate it in the LoRA tab.
No config file found for S1-Kurzgesagt_Dreamlike-000008. You can generate it in the LoRA tab.
No config file found for StickersRedmond. You can generate it in the LoRA tab.
No config file found for Stylized_Setting_SDXL. You can generate it in the LoRA tab.
No config file found for sxz-icons-v5. You can generate it in the LoRA tab.
No config file found for sxz-texture-sdxl. You can generate it in the LoRA tab.
No config file found for TShirtDesignRedmondV2-Tshirtdesign-TshirtDesignAF. You can generate it in the LoRA tab.
No config file found for UiUX-SDXL. You can generate it in the LoRA tab.
No config file found for vectorL. You can generate it in the LoRA tab.
No config file found for Vector_illustration_V2. You can generate it in the LoRA tab.
No config file found for VintageDrawing01-00_CE_SDXL_128OT. You can generate it in the LoRA tab.
D:\StablediffFORGE\webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py:614: GradioDeprecationWarning: unexpected argument for Dropdown: default
trt_lora_dropdown = gr.Dropdown(
Model selected: {'checkpoint_info': {'filename': 'D:\StablediffFORGE\webui\models\Stable-diffusion\juggernautXL_v9Rundiffusionphoto2.safetensors', 'hash': '799b5005'}, 'additional_modules': ['D:\StablediffFORGE\webui\models\VAE\sdxlVAE_sdxlVAE.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 11.9s (prepare environment: 2.0s, import torch: 4.7s, other imports: 0.3s, load scripts: 1.8s, create ui: 1.9s, gradio launch: 1.1s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 87.50% GPU memory (7167.00 MB) to load weights, and use 12.50% GPU memory (1024.00 MB) to do matrix computation.
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
model ignore: C:\Users\Alisa/.insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
model ignore: C:\Users\Alisa/.insightface\models\buffalo_l\2d106det.onnx landmark_2d_106
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Alisa/.insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
model ignore: C:\Users\Alisa/.insightface\models\buffalo_l\genderage.onnx genderage
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Alisa/.insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:04<00:00, 1.58it/s]
Loading PhotoMaker v2 components [1] id_encoder from [D:\StablediffFORGE\webui\models\diffusers\models--TencentARC--PhotoMaker-V2\snapshots\f5a1e5155dc02166253fa7e29d13519f5ba22eac]...
4096
Loading PhotoMaker v2 components [2] lora_weights from [D:\StablediffFORGE\webui\models\diffusers\models--TencentARC--PhotoMaker-V2\snapshots\f5a1e5155dc02166253fa7e29d13519f5ba22eac]
Forge Space: Moved 9255 Modules to cpu
Automatic hook: T2IAdapter.forward
Automatic hook: PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken.forward
Automatic hook: CLIPTextModel.forward
Automatic hook: CLIPTextModelWithProjection.forward
Automatic hook: UNet2DConditionModel.forward
Automatic hook: AutoencoderKL.forward
Automatic hook: AutoencoderKL.encode
Automatic hook: AutoencoderKL.decode
Running on local URL: http://127.0.0.1:7861
To create a public link, set `share=True` in `launch()`.
Entering Forge Space GPU ...
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
[Debug] Generate image using aspect ratio [Instagram (1:1)] => 1024 x 1024
Start inference...
[Debug] Seed: 2122694186
[Debug] Prompt: instagram photo, portrait photo of a woman img, colorful, perfect face, natural skin, hard shadows, film grain,
[Debug] Neg Prompt: (asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth
10
Use adapter: False | output size: (1024, 1024)
[Memory Management] Current Free GPU Memory: 7075.54 MB
[Memory Management] Required Model Memory: 234.72 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 5304.82 MB
Move module to GPU: CLIPTextModel
Move module to CPU: CLIPTextModel
[Memory Management] Current Free GPU Memory: 7059.19 MB
[Memory Management] Required Model Memory: 1324.96 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4198.23 MB
Move module to GPU: CLIPTextModelWithProjection
Move module to CPU: CLIPTextModelWithProjection
[Memory Management] Current Free GPU Memory: 7058.81 MB
[Memory Management] Required Model Memory: 234.72 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 5288.09 MB
Move module to GPU: CLIPTextModel
Move module to CPU: CLIPTextModel
[Memory Management] Current Free GPU Memory: 7058.58 MB
[Memory Management] Required Model Memory: 1324.96 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4197.62 MB
Move module to GPU: CLIPTextModelWithProjection
Move module to CPU: CLIPTextModelWithProjection
[Memory Management] Current Free GPU Memory: 7059.11 MB
[Memory Management] Required Model Memory: 234.72 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 5288.39 MB
Move module to GPU: CLIPTextModel
Move module to CPU: CLIPTextModel
[Memory Management] Current Free GPU Memory: 7058.88 MB
[Memory Management] Required Model Memory: 1324.96 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4197.92 MB
Move module to GPU: CLIPTextModelWithProjection
Move module to CPU: CLIPTextModelWithProjection
[Memory Management] Current Free GPU Memory: 7058.50 MB
[Memory Management] Required Model Memory: 234.72 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 5287.78 MB
Move module to GPU: CLIPTextModel
Move module to CPU: CLIPTextModel
[Memory Management] Current Free GPU Memory: 7058.28 MB
[Memory Management] Required Model Memory: 1324.96 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4197.32 MB
Move module to GPU: CLIPTextModelWithProjection
Move module to CPU: CLIPTextModelWithProjection
[Memory Management] Current Free GPU Memory: 7057.25 MB
[Memory Management] Required Model Memory: 1036.45 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4484.80 MB
Move module to GPU: PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken
Move module to CPU: PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken
[Memory Management] Current Free GPU Memory: 7056.27 MB
[Memory Management] Required Model Memory: 5074.24 MB
[Memory Management] Required Inference Memory: 1536.00 MB
[Memory Management] Estimated Remaining GPU Memory: 446.03 MB
Move module to GPU: UNet2DConditionModel
Traceback (most recent call last):
File "D:\StablediffFORGE\webui\venv\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync(  # type: ignore
File "D:\StablediffFORGE\webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\StablediffFORGE\webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "D:\StablediffFORGE\webui\spaces.py", line 164, in wrapper
result = func(*args, **kwargs)
File "D:\StablediffFORGE\webui\extensions-builtin\forge_space_photo_maker_v2\forge_app.py", line 160, in generate_image
images = pipe(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\StablediffFORGE\webui\extensions-builtin\forge_space_photo_maker_v2\huggingface_space_mirror\pipeline_t2i_adapter.py", line 860, in __call__
noise_pred = self.unet(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StablediffFORGE\webui\spaces.py", line 227, in patched_method
return original_method(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1209, in forward
sample, res_samples = downsample_block(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1288, in forward
hidden_states = attn(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\diffusers\models\transformers\transformer_2d.py", line 442, in forward
hidden_states = block(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\diffusers\models\attention.py", line 453, in forward
attn_output = self.attn1(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\diffusers\models\attention_processor.py", line 559, in forward
return self.processor(
File "D:\StablediffFORGE\webui\backend\attention.py", line 488, in __call__
hidden_states = attention_function(query, key, value, heads=attn.heads, mask=attention_mask)
File "D:\StablediffFORGE\webui\backend\attention.py", line 307, in attention_xformers
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 276, in memory_efficient_attention
return _memory_efficient_attention(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 395, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 414, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "D:\StablediffFORGE\webui\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 119, in _dispatch_fw
return _run_priority_list(
File "D:\StablediffFORGE\webui\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 55, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(2, 4096, 10, 64) (torch.bfloat16)
key : shape=(2, 4096, 10, 64) (torch.bfloat16)
value : shape=(2, 4096, 10, 64) (torch.bfloat16)
attn_bias : <class 'NoneType'>
p : 0.0
`decoderF` is not supported because:
attn_bias type is <class 'NoneType'>
bf16 is only supported on A100+ GPUs
`flshattF@v2.5.7` is not supported because:
requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
bf16 is only supported on A100+ GPUs
`cutlassF` is not supported because:
bf16 is only supported on A100+ GPUs
`smallkF` is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
dtype=torch.bfloat16 (supported: {torch.float32})
bf16 is only supported on A100+ GPUs
unsupported embed per head: 64
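The final `NotImplementedError` is the actual failure: the PhotoMaker pipeline hands xformers `torch.bfloat16` tensors, and every candidate attention kernel rejects bf16 on a GPU with compute capability (7, 5) (Turing); xformers' bf16 paths need Ampere-class hardware, capability (8, 0) or higher. Assuming PyTorch is installed in the same venv, a minimal sketch to confirm what this GPU supports:

```python
import torch

# Sketch: check why the bf16 attention kernels are being rejected.
# xformers' bf16 kernels require compute capability >= (8, 0) (Ampere+),
# and capabilities compare naturally as tuples: (7, 5) < (8, 0).
if torch.cuda.is_available():
    cap = torch.cuda.get_device_capability(0)
    print("compute capability:", cap)
    print("bf16 supported:", torch.cuda.is_bf16_supported())
    if cap < (8, 0):
        print("-> bf16 kernels unavailable; run the pipeline in fp16 or fp32")
else:
    print("CUDA not available on this machine")
```

If this prints a capability below (8, 0), the card cannot execute bf16 kernels regardless of driver or library version, so the pipeline would need to load its models in fp16 or fp32 instead of bf16.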