Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-224-g90019688
Commit hash: 9001968898187e5baf83ecc3b9e44c6a6a1651a6
Launching Web UI with arguments:
Total VRAM 24564 MB, total RAM 65228 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
D:\AI Image\Forge\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: D:\AI Image\Forge\webui\models\ControlNetPreprocessor
2024-10-06 00:45:31,571 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'D:\AI Image\Forge\webui\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors', 'hash': 'a0f13c83'}, 'vae_filename': None, 'unet_storage_dtype': None}
Running on local URL: http://127.0.0.1:7860
Exception in thread Thread-18 (webui_worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "D:\AI Image\Forge\webui\webui.py", line 86, in webui_worker
    app, local_url, share_url = shared.demo.launch(
  File "D:\AI Image\Forge\system\python\lib\site-packages\gradio\blocks.py", line 2446, in launch
    raise ValueError(
ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost.
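The ValueError is raised by Gradio when it cannot reach its own server on http://127.0.0.1:7860, commonly because a system proxy or firewall is intercepting localhost traffic. One workaround is to pass a launch flag through the launcher script. A hedged sketch for webui-user.bat follows; the flag names are assumed from the standard AUTOMATIC1111/Forge command-line options and should be verified against your install:

```shell
rem webui-user.bat -- sketch, not a confirmed fix for this setup.
rem --share creates a public *.gradio.live tunnel, which satisfies the
rem "a shareable link must be created" condition in the error message:
set COMMANDLINE_ARGS=--share

rem Alternatively, bind the server to all network interfaces instead of
rem only localhost (useful when 127.0.0.1 itself is being blocked):
rem set COMMANDLINE_ARGS=--listen
```

The other path the error message itself suggests is checking Windows proxy settings (or any HTTP_PROXY/HTTPS_PROXY environment variables) so that 127.0.0.1 is excluded from proxying.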