Closed. DA-Charlie closed this issue 8 months ago.
Describe the bug

I tried both the main and dev versions; both throw the same error (possibly the same issue as #517).

Screenshots

No response

Console logs, from start to end.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --always-normal-vram --api --xformers --pin-shared-memory --cuda-malloc --cuda-stream --ckpt-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Stable-diffusion --hypernetwork-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/hypernetworks --embeddings-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/embeddings --lora-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Lora
Using cudaMallocAsync backend.
Total VRAM 4096 MB, total RAM 40628 MB
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3050 Ti Laptop GPU : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated: True
Using xformers cross attention
ControlNet preprocessor location: C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor
Civitai Helper: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.3.0, num models: 15
Loading weights [15012c538f] from C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors
2024-03-14 13:51:50,712 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Set Proxy:
model_type EPS
UNet ADM Dimension 0
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 23.1s (prepare environment: 6.1s, import torch: 6.1s, import gradio: 1.2s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.8s, load scripts: 4.9s, create ui: 1.0s, gradio launch: 0.9s, add APIs: 0.8s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3289.9630851745605
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 1811.7554626464844
Moving model(s) has taken 0.09 seconds
Model loaded in 6.0s (load weights from disk: 0.6s, forge load real models: 4.7s, calculate empty prompt: 0.7s).
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\ARWGladiatorArmor.safetensors for BaseModel-UNet with 192 keys at weight 0.56 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\ARWGladiatorArmor.safetensors for BaseModel-CLIP with 72 keys at weight 0.56 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\perfect hands_1.5.safetensors for BaseModel-UNet with 192 keys at weight 1.0 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\perfect hands_1.5.safetensors for BaseModel-CLIP with 72 keys at weight 1.0 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\ARWTheColosseum.safetensors for BaseModel-UNet with 192 keys at weight 0.45 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\ARWTheColosseum.safetensors for BaseModel-CLIP with 72 keys at weight 0.45 (skipped 0 keys)
To load target model SD1ClipModel
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 2950.054941177368
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 1926.0549411773682
Moving model(s) has taken 0.19 seconds
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3248.533639907837
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 585.1198749542236
Moving model(s) has taken 0.74 seconds
100%|██████████| 25/25 [00:13<00:00, 1.82it/s]
To load target model AutoencoderKL
100%|██████████| 25/25 [00:12<00:00, 1.91it/s]
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3229.2027683258057
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 2045.6456871032715
Moving model(s) has taken 0.52 seconds
*** Error running postprocess_image: C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\adetailer\scripts\!adetailer.py
    Traceback (most recent call last):
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\scripts.py", line 883, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\adetailer\adetailer\traceback.py", line 159, in wrapper
        raise error from None
    NotImplementedError:

System info
  Platform:    Windows-10-10.0.22631-SP0
  Python:      3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
  Version:     f0.0.17v1.8.0rc-latest-276-g29be1da7
  Commit:      29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
  Commandline: ['launch.py', '--always-normal-vram', '--api', '--xformers', '--pin-shared-memory', '--cuda-malloc', '--cuda-stream', '--ckpt-dir', 'C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Stable-diffusion', '--hypernetwork-dir', 'C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/hypernetworks', '--embeddings-dir', 'C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/embeddings', '--lora-dir', 'C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Lora']
  Libraries:   {'torch': '2.1.2+cu121', 'torchvision': '0.16.2', 'ultralytics': '8.1.27', 'mediapipe': '0.10.10'}

Inputs
  prompt:          full body picture, (1man, black hairs:1.2), gladiator armor, (gladiator armor and full clothes, gladiator short:1.1), (chasing, running, charging forward:1.2), <lora:ARWGladiatorArmor:0.56>, (fighting, holding sword:1.1), yelling with a furious face, (rage in eyes), (attack posture:1.1), (masterpiece:1.1), <lora:perfect hands:1> BREAK in the background a plane sandy colosseum, in roman empire, <lora:ARWTheColosseum:0.45>
  negative_prompt: ((nude)) ,(NSFW),deformed, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, disgusting, poorly drawn hands, missing limb, floating limbs, disconnected limbs, malformed hands, blurry, ((((mutated hands and fingers)))), watermark, watermarked, oversaturated, censored, distorted hands, amputation, missing hands, obese, doubled face, double hands, b&w, black and white, sepia, flowers, roses, (worst quality, low quality, normal quality), By bad artist -neg, easynegative, FastNegativeV2, shirtless, chest nudity, without chest armor, open clothes, soft clothes, thin clothes, too little clothes, thighs outside
  n_iter:          1
  batch_size:      1
  width:           683
  height:          1024
  sampler_name:    DPM++ 2M Karras
  enable_hr:       False
  hr_upscaler:     Latent
  checkpoint:      realisticVisionV51_v51VAE.safetensors [15012c538f]
  vae:             Automatic
  unet:            Automatic

ADetailer
  version:             24.3.0
  ad_model:            face_yolov8n.pt
  ad_prompt:
  ad_negative_prompt:
  ad_controlnet_model: None
  is_api:              False

Traceback (most recent call last):
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\adetailer\adetailer\traceback.py:139 in wrapper
    return func(*args, **kwargs)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\adetailer\scripts\!adetailer.py:778 in postprocess_image
    is_processed |= self._postprocess_image_inner(p, pp, args, n=n)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\adetailer\scripts\!adetailer.py:701 in _postprocess_image_inner
    pred = predictor(ad_model, pp.image, args.ad_confidence, **kwargs)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\adetailer\adetailer\ultralytics.py:29 in ultralytics_predict
    pred = model(image, conf=confidence, device=device)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\ultralytics\engine\model.py:169 in __call__
    return self.predict(source, stream, **kwargs)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\ultralytics\engine\model.py:439 in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(s
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\ultralytics\engine\predictor.py:168 in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\ultralytics\engine\predictor.py:255 in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\ultralytics\models\yolo\detect\predict.py:25 in postprocess
    preds = ops.non_max_suppression(
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\ultralytics\utils\ops.py:282 in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\torchvision\ops\boxes.py:41 in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\_ops.py:692 in __call__
    return self._op(*args, **kwargs or {})

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:296 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]

---
Total progress: 100%|██████████| 25/25 [00:15<00:00, 1.64it/s]
Total progress: 100%|██████████| 25/25 [00:15<00:00, 1.91it/s]

List of installed extensions

No response
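The failing call in that traceback is `torchvision.ops.nms` on CUDA tensors. A minimal standalone sketch (not part of the original report) that reproduces the same NotImplementedError when torchvision was installed without CUDA kernels alongside a CUDA build of torch:

```python
# Hypothetical standalone check, run in the webui's Python environment.
# With torch built for CUDA but a CPU-only torchvision, the nms call below
# raises the same "Could not run 'torchvision::nms' ... 'CUDA' backend" error.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")

keep = nms(boxes, scores, iou_threshold=0.5)  # fails on a mismatched install
print(keep)
```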
I found out I had a CUDA build of torch but a torchvision build without the CUDA suffix:

'torch': '2.1.2+cu121'
'torchvision': '0.16.2'

I had to install the matching CUDA build of torchvision instead:

'torchvision': '0.16.2+cu121'

In short, whichever torch build you have, torchvision has to match it: the same version and the same CUDA suffix (+cu121 here).
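To double-check the pairing after reinstalling, a quick sketch (the version strings and the cu121 index URL below are taken from this setup; adjust them for other CUDA versions):

```python
# Verify that torch and torchvision report matching builds and that the CUDA
# NMS kernel is actually present (assumes the cu121 setup from this issue).
import torch
import torchvision
from torchvision.ops import nms

print("torch:", torch.__version__)              # e.g. 2.1.2+cu121
print("torchvision:", torchvision.__version__)  # should also end in +cu121
print("cuda available:", torch.cuda.is_available())

boxes = torch.tensor([[0.0, 0.0, 1.0, 1.0]], device="cuda")
scores = torch.tensor([1.0], device="cuda")
print("nms on CUDA ok:", nms(boxes, scores, 0.5))

# If torchvision lacks the +cuXXX suffix, reinstall the matching wheel, e.g.:
#   pip install torchvision==0.16.2+cu121 --index-url https://download.pytorch.org/whl/cu121
```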