TimmekHW closed this issue 2 months ago
How I fixed it (I'm using https://github.com/lllyasviel/stable-diffusion-webui-forge):
set DIR=%~dp0system
set PATH=%DIR%\git\bin;%DIR%\python;%DIR%\python\Scripts;%PATH%
set PY_LIBS=%DIR%\python\Scripts\Lib;%DIR%\python\Scripts\Lib\site-packages
set PY_PIP=%DIR%\python\Scripts
set SKIP_VENV=1
set PIP_INSTALLER_LOCATION=%DIR%\python\get-pip.py
set TRANSFORMERS_CACHE=%DIR%\transformers-cache
cmd /k
4. Opened venv.bat
And only after that did I enter the needed commands:
5. pip install --upgrade nvidia-cudnn-cu12
6. pip install --upgrade tensorrt
7. pip install --upgrade optimum-nvidia
And this fixed it for me. (A quick way to verify the upgrades is sketched below.)
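For anyone repeating these steps, here is a minimal sanity-check sketch, run from the same venv.bat console so that Forge's embedded Python in system\python is the one on PATH (the interpreter location and exact package names follow the one-click Forge layout assumed above):

# sanity check after the three pip upgrades (run inside the venv.bat console)
import sys
import importlib.metadata as md

print("interpreter:", sys.executable)   # should point into ...\system\python, not a separate venv

import tensorrt as trt                  # this is the import that previously failed on 'tensorrt_bindings'
print("tensorrt:", trt.__version__)

# report the versions of the packages that were just upgraded
for pkg in ("tensorrt", "nvidia-cudnn-cu12", "optimum-nvidia"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")

If the interpreter path points somewhere else, the pip commands above were installing into the wrong Python.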
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-218-g643a485d
Commit hash: 643a485d1aff11acc657b24ee32d019e28d85b07
removing old version of tensorrt
Launching Web UI with arguments:
Total VRAM 24564 MB, total RAM 65322 MB
pytorch version: 2.4.0+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
G:\webui_forge_cu124_torch24\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: G:\webui_forge_cu124_torch24\webui\models\ControlNetPreprocessor
** Error loading script: trt.py
Traceback (most recent call last):
  File "G:\webui_forge_cu124_torch24\webui\modules\scripts.py", line 525, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "G:\webui_forge_cu124_torch24\webui\modules\script_loading.py", line 13, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 13, in <module>
    import ui_trt
  File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py", line 18, in <module>
    from exporter import export_onnx, export_trt, export_lora
  File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\exporter.py", line 23, in <module>
    from utilities import Engine
  File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\utilities.py", line 32, in <module>
    import tensorrt as trt
  File "G:\webui_forge_cu124_torch24\system\python\lib\site-packages\tensorrt\__init__.py", line 18, in <module>
    from tensorrt_bindings import *
ModuleNotFoundError: No module named 'tensorrt_bindings'
2024-08-27 18:04:09,451 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'G:\webui_forge_cu124_torch24\webui\models\Stable-diffusion\flux_dev.safetensors', 'hash': '4af4416b'}, 'vae_filename': None, 'unet_storage_dtype': None}
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 9.6s (prepare environment: 2.1s, launcher: 1.3s, import torch: 2.6s, initialize shared: 0.1s, other imports: 0.5s, load scripts: 1.1s, create ui: 1.2s, gradio launch: 0.6s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
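The traceback comes down to tensorrt\__init__.py doing `from tensorrt_bindings import *` while the bindings wheel was missing from the embedded Python. To see which TensorRT-related wheels are actually present before and after the fix, a small diagnostic sketch (run with the same embedded interpreter the webui uses; the exact distribution names differ between TensorRT releases, so treat them as an assumption):

# list installed distributions related to TensorRT (run with Forge's embedded Python)
import importlib.metadata as md

for dist in md.distributions():
    name = (dist.metadata["Name"] or "")
    if "tensorrt" in name.lower():
        # on a working install this list should include a *-bindings distribution,
        # which is what provides the 'tensorrt_bindings' module the extension imports
        print(name, dist.version)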