lshqqytiger / stable-diffusion-webui-amdgpu


[Bug]: [ModuleNotFoundError]: No module named 'models.blip' (with temp fix) #389

Open stduhpf opened 4 months ago

stduhpf commented 4 months ago


What happened?

When trying to get a caption using CLIP (in the img2img tab), it returns "" and a ModuleNotFoundError from line 92 of interrogate.py appears in the logs.

Steps to reproduce the problem

  1. Go to the img2img tab.
  2. Put in an image.
  3. Click "Interrogate CLIP".
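
Since the instance is launched with --api, the same code path can also be hit headlessly. Here is a sketch, assuming upstream's /sdapi/v1/interrogate endpoint and a local instance on the default port; "example.png" stands in for any image:

import base64
import requests

# Sketch of a headless repro, assuming upstream's /sdapi/v1/interrogate
# endpoint and a local instance at 127.0.0.1:7860.
with open("example.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/interrogate",
    json={"image": image_b64, "model": "clip"},
)
print(response.json())  # the caption comes back empty when the bug triggers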

What should have happened?

It should have generated a caption for the image.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

Here is the sysinfo:

{
  "Platform": "Windows-10-10.0.19045-SP0",
  "Python": "3.10.11",
  "Version": "v1.7.0-343-g1ed25430",
  "Commit": "1ed25430486ba97f24a3dd9469fc6cc6b188789f",
  "Script path": "C:\\stable-diffusion-webui-directml",
  "Data path": "C:\\stable-diffusion-webui-directml",
  "Extensions dir": "C:\\stable-diffusion-webui-directml\\extensions",
  "Checksum": "9221071bbcd50ad5ea0604159cb4388ec7001f1cec4ca53c7a36fb3b1237e8dc",
  "Commandline": [
    "launch.py",
    "--lowvram",
    "--opt-sub-quad-attention",
    "--opt-split-attention-v1",
    "--listen",
    "--api",
    "--skip-install",
    "--enable-insecure-extension-access",
    "--use-directml",
    "--no-gradio-queue"
  ],
  "Torch env info": {
    "torch_version": "2.0.0+cpu",
    "is_debug_build": "False",
    "cuda_compiled_version": null,
    "gcc_version": "(x86_64-posix-seh, Built by strawberryperl.com project) 8.3.0\r",
    "clang_version": null,
    "cmake_version": "version 3.26.0",
    "os": "Microsoft Windows 10 Famille",
    "libc_version": "N/A",
    "python_version": "3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)",
    "python_platform": "Windows-10-10.0.19045-SP0",
    "is_cuda_available": "False",
    "cuda_runtime_version": null,
    "cuda_module_loading": "N/A",
    "nvidia_driver_version": null,
    "nvidia_gpu_models": null,
    "cudnn_version": null,
    "pip_version": "pip3",
    "pip_packages": [
      "lion-pytorch==0.1.2",
      "numpy==1.23.5",
      "open-clip-torch==2.20.0",
      "pytorch-lightning==1.9.4",
      "pytorch_optimizer==2.12.0",
      "torch==2.0.0",
      "torch-directml==0.2.0.dev230426",
      "torchdiffeq==0.2.3",
      "torchmetrics==0.10.3",
      "torchsde==0.2.6",
      "torchvision==0.15.1"
    ],
    "conda_packages": "",
    "hip_compiled_version": "N/A",
    "hip_runtime_version": "N/A",
    "miopen_runtime_version": "N/A",
    "caching_allocator_config": "",
    "is_xnnpack_available": "True",
    "cpu_info": [
      "Architecture=9",
      "CurrentClockSpeed=3701",
      "DeviceID=CPU0",
      "Family=107",
      "L2CacheSize=6144",
      "L2CacheSpeed=",
      "Manufacturer=AuthenticAMD",
      "MaxClockSpeed=3701",
      "Name=AMD Ryzen 9 5900X 12-Core Processor",
      "ProcessorType=3",
      "Revision=8450"
    ]
  },
  "Exceptions": [ ... ]
}

Console logs

[2024-02-15 21:45:22,366][INFO][modules.shared_state] - Starting job interrogate
*** Error interrogating
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-directml\modules\interrogate.py", line 192, in interrogate
        self.load()
      File "C:\stable-diffusion-webui-directml\modules\interrogate.py", line 121, in load
        self.blip_model = self.load_blip_model()
      File "C:\stable-diffusion-webui-directml\modules\interrogate.py", line 92, in load_blip_model
        import models.blip
    ModuleNotFoundError: No module named 'models.blip'

---
[2024-02-15 21:45:22,506][INFO][modules.shared_state] - Ending job interrogate (0.14 seconds)

Additional information

Basically, Python can't locate the models.blip module: the BLIP repository that provides models/blip.py is cloned into repositories/BLIP, but that directory doesn't end up on sys.path.

I fixed it by adding

import sys
# make the bundled BLIP repository (which provides models/blip.py) importable
sys.path.append('./repositories/BLIP')

in launch.py.

I'm not sure if this is the proper way of fixing it, but it worked for me.
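
For a variant that doesn't depend on the working directory, here is a minimal sketch (assuming the standard layout where BLIP is cloned into repositories/BLIP next to launch.py; the blip_dir name is mine) that resolves the path relative to launch.py itself:

import os
import sys

# Resolve repositories/BLIP relative to this file (launch.py) so the fix
# works no matter which directory the web UI is started from.
blip_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "repositories", "BLIP")
if blip_dir not in sys.path:
    sys.path.append(blip_dir)

The guard avoids appending the same path twice if launch.py is imported more than once.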

stduhpf commented 4 months ago

@lshqqytiger It's weird that you can't reproduce it; it happens consistently on my end, even with a fresh clone of this repo and a new Python venv. 🤷‍♂️

lshqqytiger commented 4 months ago

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7765 I don't know why this issue still exists, even upstream.

stduhpf commented 4 months ago

Oh, I didn't check for older issues upstream. I didn't see it among the most recent issues, so I assumed it originated here. It should be an easy fix, too.