vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0
5.42k stars · 393 forks

[Issue]: Conda env ignored on windows #3072

Closed by StarShine1A 4 months ago

StarShine1A commented 4 months ago

Issue Description

Alright, so on my Windows machine I use conda. I have two envs: base (Python 3.11) and SDML (Python 3.10). Even after initialization (conda init) and conda activate SDML, Automatic still uses the base installation's Python. I need it to use SDML because I need DirectML, which can only be installed on Python 3.10 and lower.
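A quick sanity check (not part of the original report) is to print which interpreter is actually executing; if the launcher silently created and activated its own venv, this path will point inside that venv rather than into the conda env:

```python
import sys

# Print the interpreter actually running this script.
# Under the SDML conda env this should point into the SDML
# environment directory; if it points into a "venv" folder,
# the launcher has re-exec'd into its own virtual environment.
print("executable:", sys.executable)
print("version:", ".".join(map(str, sys.version_info[:3])))
```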

Version Platform Description

Using automatic master branch, Windows 11, Firefox.

Relevant log output

(SDML) PS C:\Users\star\Desktop\automatic-master> .\webui.ps1 --debug --lowvram --use-directml
Using VENV: C:\Users\star\Desktop\automatic-master\venv
17:26:15-310997 INFO     Starting SD.Next
17:26:15-310997 INFO     Logger: file="C:\Users\star\Desktop\automatic-master\sdnext.log" level=DEBUG size=65
                         mode=create
17:26:15-310997 INFO     Python 3.11.5 on Windows
17:26:15-326625 WARNING  Not a git repository, all git operations are disabled
17:26:15-405145 INFO     Version: app=sd.next version=unknown
17:26:15-420788 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 142 Stepping 12, GenuineIntel system=Windows
                         release=Windows-10-10.0.22631-SP0 python=3.11.5
17:26:15-420788 DEBUG    Setting environment tuning
17:26:15-420788 DEBUG    HF cache folder: C:\Users\star\.cache\huggingface\hub
17:26:15-420788 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=True openvino=False
17:26:15-436396 DEBUG    Torch allowed: cuda=False rocm=False ipex=False diml=True openvino=False
17:26:15-436396 INFO     Using DirectML Backend
17:26:15-436396 DEBUG    Installing torch: torch-directml
17:26:15-436396 INFO     Startup: quick launch
17:26:15-436396 INFO     Verifying requirements
17:26:15-455038 INFO     Verifying packages
17:26:15-455038 DEBUG    Register paths
17:26:15-468044 ERROR    Required path not found:
                         path=C:\Users\star\Desktop\automatic-master\modules\k-diffusion\k_diffusion\sampling.py
                         item=k_diffusion
17:26:15-468044 INFO     Extensions: disabled=[]
17:26:15-468044 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg'] extensions-builtin
17:26:15-468044 INFO     Extensions: enabled=[] extensions
17:26:15-468044 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
17:26:15-483684 DEBUG    Starting module: <module 'webui' from 'C:\\Users\\star\\Desktop\\automatic-master\\webui.py'>
17:26:15-483684 INFO     Command line args: ['--debug', '--lowvram', '--use-directml'] lowvram=True use_directml=True
                         debug=True
17:26:15-483684 DEBUG    Env flags: []
17:26:24-000799 INFO     Load packages: {'torch': '2.2.2+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
17:26:25-129755 ERROR    DirectML initialization failed: No module named 'torch_directml'
17:26:25-129755 ERROR    DirectML initialization failed: No module named 'torch_directml'
17:26:25-490742 DEBUG    Read: file="config.json" json=33 bytes=1462 time=0.000
17:26:25-490742 INFO     Engine: backend=Backend.DIFFUSERS compute=cpu device=cpu attention="Scaled-Dot-Product"
                         mode=no_grad
17:26:25-506376 INFO     Device:
17:26:25-506376 DEBUG    Read: file="html\reference.json" json=36 bytes=21248 time=0.000
17:26:26-478658 DEBUG    Importing LDM
17:26:26-509918 DEBUG    Entering start sequence
17:26:26-509918 DEBUG    Initializing
17:26:26-572420 INFO     Available VAEs: path="models\VAE" items=0
17:26:26-577927 INFO     Disabled extensions: ['sd-webui-controlnet']
17:26:26-577927 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=0 time=0.00
17:26:26-577927 DEBUG    Read: file="cache.json" json=1 bytes=194 time=0.000
17:26:26-588438 DEBUG    Read: file="metadata.json" json=2 bytes=1335 time=0.000
17:26:26-588438 INFO     Available models: path="models\Stable-diffusion" items=1 time=0.01
17:26:26-713850 DEBUG    Load extensions
17:26:26-823626 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
17:26:26-823626 INFO     LoRA networks: available=0 folders=2
17:26:26-854878 DEBUG    Extensions init time: 0.14
17:26:26-878512 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2640 time=0.000
17:26:26-886519 DEBUG    Load upscalers: total=28 downloaded=0 user=0 time=0.03 ['None', 'Lanczos', 'Nearest', 'ESRGAN',
                         'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
17:26:26-902158 DEBUG    Load styles: folder="models\styles" items=288 time=0.02
17:26:26-902158 DEBUG    Creating UI
17:26:26-917778 INFO     UI theme: name="black-teal" style=Auto base=sdnext.css
17:26:26-933433 DEBUG    UI initialize: txt2img
17:26:26-980681 DEBUG    Extra networks: page='model' items=37 subfolders=2 tab=txt2img
                         folders=['models\\Stable-diffusion', 'models\\Diffusers', 'models\\Reference'] list=0.02
                         thumb=0.00 desc=0.00 info=0.00 workers=4
17:26:27-027564 DEBUG    Extra networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles',
                         'html'] list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4
17:26:27-027564 DEBUG    Extra networks: page='embedding' items=0 subfolders=0 tab=txt2img
                         folders=['models\\embeddings'] list=0.02 thumb=0.02 desc=0.00 info=0.00 workers=4
17:26:27-027564 DEBUG    Extra networks: page='hypernetwork' items=0 subfolders=0 tab=txt2img
                         folders=['models\\hypernetworks'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
17:26:27-043196 DEBUG    Extra networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4
17:26:27-043196 DEBUG    Extra networks: page='lora' items=0 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
17:26:27-262786 DEBUG    UI initialize: img2img
17:26:27-513993 DEBUG    UI initialize: control models=models\control
17:26:28-220345 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000
17:26:28-408645 DEBUG    Themes: builtin=12 gradio=5 huggingface=0
17:26:28-518438 INFO     Extension list is empty: refresh required
17:26:28-534055 DEBUG    Extension list: processed=7 installed=7 enabled=7 disabled=0 visible=7 hidden=0
17:26:28-675098 DEBUG    Root paths: ['C:\\Users\\star\\Desktop\\automatic-master']
17:26:28-769350 INFO     Local URL: http://127.0.0.1:7860/
17:26:28-769350 DEBUG    Gradio functions: registered=1473
17:26:28-769350 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
17:26:28-785417 DEBUG    Creating API
17:26:28-942095 DEBUG    Scripts setup: ['IP Adapters:0.031', 'AnimateDiff:0.016', 'Face:0.047', 'Outpainting:0.031']
17:26:28-957723 DEBUG    Model metadata: file="metadata.json" no changes
17:26:28-957723 DEBUG    Model requested: fn=<lambda>
17:26:28-957723 INFO     Select: model="stable-diffusion_v1.5-model [1a189f0be6]"
17:26:28-957723 DEBUG    Load model: existing=False
                         target=C:\Users\star\Desktop\automatic-master\models\Stable-diffusion\stable-diffusion_v1.5-mod
                         el.safetensors info=None
17:26:28-957723 INFO     Torch override dtype: no-half set
17:26:28-957723 INFO     Torch override VAE dtype: no-half set
17:26:28-957723 DEBUG    Desired Torch parameters: dtype=FP32 no-half=True no-half-vae=True upscast=False
17:26:28-973347 INFO     Setting Torch parameters: device=cpu dtype=torch.float32 vae=torch.float32 unet=torch.float32
                         context=no_grad fp16=None bf16=None optimization=Scaled-Dot-Product
17:26:28-973347 DEBUG    Diffusers loading:
                         path="C:\Users\star\Desktop\automatic-master\models\Stable-diffusion\stable-diffusion_v1.5-mode
                         l.safetensors"
17:26:28-973347 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline
                         file="C:\Users\star\Desktop\automatic-master\models\Stable-diffusion\stable-diffusion_v1.5-mode
                         l.safetensors" size=7346MB
17:26:31-124346 DEBUG    Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True,
                         'torch_dtype': torch.float32, 'load_connected_pipeline': True, 'extract_ema': True,
                         'original_config_file': 'configs/v1-inference.yaml', 'use_safetensors': True}
17:26:31-124346 DEBUG    Setting model: enable sequential CPU offload
17:26:31-139967 ERROR    Failed to load diffusers model
17:26:31-139967 ERROR    loading Diffusers model: AssertionError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\Users\star\Desktop\automatic-master\modules\sd_models.py:1087 in load_diffuser                                    │
│                                                                                                                      │
│   1086 │   │                                                                                                         │
│ ❱ 1087 │   │   set_diffuser_options(sd_model, vae, op)                                                               │
│   1088                                                                                                               │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\modules\sd_models.py:698 in set_diffuser_options                              │
│                                                                                                                      │
│    697 │   │   │   │   shared.log.warning(f'Disabling {op} "Move model to CPU" since "Sequential CPU offload" is ena │
│ ❱  698 │   │   │   sd_model.enable_sequential_cpu_offload()                                                          │
│    699 │   │   │   sd_model.has_accelerate = True                                                                    │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py:1084 in enable_s │
│                                                                                                                      │
│   1083 │   │   │   │   offload_buffers = len(model._parameters) > 0                                                  │
│ ❱ 1084 │   │   │   │   cpu_offload(model, device, offload_buffers=offload_buffers)                                   │
│   1085                                                                                                               │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\accelerate\big_modeling.py:201 in cpu_offload          │
│                                                                                                                      │
│   200 │   add_hook_to_module(model, AlignDevicesHook(io_same_device=True), append=True)                              │
│ ❱ 201 │   attach_align_device_hook(                                                                                  │
│   202 │   │   model,                                                                                                 │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\accelerate\hooks.py:504 in attach_align_device_hook    │
│                                                                                                                      │
│   503 │   │   child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name                     │
│ ❱ 504 │   │   attach_align_device_hook(                                                                              │
│   505 │   │   │   child,                                                                                             │
│                                                                                                                      │
│                                               ... 1 frames hidden ...                                                │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\accelerate\hooks.py:495 in attach_align_device_hook    │
│                                                                                                                      │
│   494 │   │   )                                                                                                      │
│ ❱ 495 │   │   add_hook_to_module(module, hook, append=True)                                                          │
│   496                                                                                                                │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\accelerate\hooks.py:157 in add_hook_to_module          │
│                                                                                                                      │
│   156 │                                                                                                              │
│ ❱ 157 │   module = hook.init_hook(module)                                                                            │
│   158 │   module._hf_hook = hook                                                                                     │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\accelerate\hooks.py:304 in init_hook                   │
│                                                                                                                      │
│   303 │   │   │   │   for name, _ in module.named_buffers(recurse=self.place_submodules):                            │
│ ❱ 304 │   │   │   │   │   set_module_tensor_to_device(                                                               │
│   305 │   │   │   │   │   │   module, name, self.execution_device, tied_params_map=self.tied_params_map              │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\accelerate\utils\modeling.py:379 in set_module_tensor_ │
│                                                                                                                      │
│    378 │   │   if value is None:                                                                                     │
│ ❱  379 │   │   │   new_value = old_value.to(device)                                                                  │
│    380 │   │   │   if dtype is not None and device in ["meta", torch.device("meta")]:                                │
│                                                                                                                      │
│ C:\Users\star\Desktop\automatic-master\venv\Lib\site-packages\torch\cuda\__init__.py:293 in _lazy_init               │
│                                                                                                                      │
│    292 │   │   if not hasattr(torch._C, "_cuda_getDeviceCount"):                                                     │
│ ❱  293 │   │   │   raise AssertionError("Torch not compiled with CUDA enabled")                                      │
│    294 │   │   if _cudart is None:                                                                                   │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled
17:26:31-877194 INFO     Load embeddings: loaded=0 skipped=0 time=0.02
17:26:32-065513 DEBUG    GC: collected=0 device=cpu {'ram': {'used': 0.61, 'total': 7.84}} time=0.19
17:26:32-081140 INFO     Load model: time=2.92 load=2.92 native=512 {'ram': {'used': 0.61, 'total': 7.84}}
17:26:32-081140 INFO     Startup time: 16.55 torch=6.89 olive=0.09 gradio=1.46 libraries=2.48 samplers=0.06
                         extensions=0.14 face-restore=0.13 ui-en=0.39 ui-txt2img=0.19 ui-img2img=0.08 ui-control=0.13
                         ui-extras=0.31 ui-settings=0.47 ui-defaults=0.06 launch=0.17 api=0.16 checkpoint=3.14
17:26:32-089152 DEBUG    Save: file="config.json" json=33 bytes=1409 time=0.008

Backend

Diffusers

Branch

Master

Model

SD 1.5

Acknowledgements

vladmandic commented 4 months ago

SD.Next's default launcher creates and activates a venv, which is pointless and conflicts with running via conda. For manually created environments such as the one you describe, just skip webui.bat/sh/ps1 and run python launch.py directly; nothing else changes (all command-line flags remain valid).
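Concretely, the suggested launch would look like this (env name, path, and flags taken from the log above; not verified on this machine):

```shell
# Run SD.Next with the conda env's own interpreter,
# bypassing the webui.ps1 wrapper that forces its bundled venv
conda activate SDML
cd C:\Users\star\Desktop\automatic-master
python launch.py --debug --lowvram --use-directml
```

Because torch-directml is only installable on the SDML env's Python 3.10, this also lets the DirectML backend actually install instead of falling back to the CPU build of torch seen in the log.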

BatmanofZuhandArrgh commented 3 months ago

Oof, this should be noted in the installation guide:

SD.Next's default launcher creates and activates a venv, which is pointless and conflicts with running via conda. For manually created environments such as the one you describe, just skip webui.bat/sh/ps1 and run python launch.py directly; nothing else changes (all command-line flags remain valid).

vladmandic commented 3 months ago

I'll add it

BatmanofZuhandArrgh commented 3 months ago

Thank u!