continue-revolution / sd-webui-segment-anything

Segment Anything for Stable Diffusion WebUI

[Bug]: Tries to use CUDA even in CPU-only mode #133

Open mr-september opened 1 year ago

mr-september commented 1 year ago

Is there an existing issue for this?

Have you updated WebUI and this extension to the latest version?

Do you understand that you should read the 1st item of https://github.com/continue-revolution/sd-webui-segment-anything#faq if you cannot install GroundingDINO?

Do you understand that you should use the latest ControlNet extension and enable external control if you want SAM extension to control ControlNet?

Do you understand that you should read the 2nd item of https://github.com/continue-revolution/sd-webui-segment-anything#faq if you observe problems like AttributeError bool object has no attribute enabled and TypeError bool object is not subscriptable?

What happened?

Tries to use CUDA even in CPU-only mode

Steps to reproduce the problem

  1. Start SD webui
  2. Install Segment Anything, select CPU-only
  3. Click some points
  4. Click "Preview Segmentation"

What should have happened?

The extension should not attempt to use CUDA; it should generate the segmentation on the CPU.

Commit where the problem happens

webui: lshqqytiger/stable-diffusion-webui-directml, commit ba780a8
extension: commit 780fc49

What browsers do you use to access the UI ?

No response

Command Line Arguments

--opt-sub-quad-attention --medvram --disable-nan-check --autolaunch --api --cors-allow-origins=http://127.0.0.1:3456 --no-half

Console logs

Traceback (most recent call last):
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 313, in fill_tabs
    refresh()
  File "E:\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 330, in refresh
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "E:\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 330, in <listcomp>
    ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
  File "E:\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 121, in create_html
    for item in self.list_items():
  File "E:\stable-diffusion-webui-directml\modules\ui_extra_networks_checkpoints.py", line 17, in list_items
    for name, checkpoint in sd_models.checkpoints_list.items():
RuntimeError: dictionary changed size during iteration
Start SAM Processing
Initializing SAM to cpu
Traceback (most recent call last):
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\stable-diffusion-webui-directml\extensions\sd-webui-segment-anything\scripts\sam.py", line 204, in sam_predict
    sam = init_sam_model(sam_model_name)
  File "E:\stable-diffusion-webui-directml\extensions\sd-webui-segment-anything\scripts\sam.py", line 129, in init_sam_model
    sam_model_cache[sam_model_name] = load_sam_model(sam_model_name)
  File "E:\stable-diffusion-webui-directml\extensions\sd-webui-segment-anything\scripts\sam.py", line 80, in load_sam_model
    sam = sam_model_registry[model_type](checkpoint=sam_checkpoint_path)
  File "E:\stable-diffusion-webui-directml\extensions\sd-webui-segment-anything\sam_hq\build_sam_hq.py", line 38, in build_sam_hq_vit_b
    return _build_sam_hq(
  File "E:\stable-diffusion-webui-directml\extensions\sd-webui-segment-anything\sam_hq\build_sam_hq.py", line 108, in _build_sam_hq
    state_dict = torch.load(f)
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\pickle.py", line 1213, in load
    dispatch[key[0]](self)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\pickle.py", line 1254, in load_binpersid
    self.append(self.persistent_load(pid))
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "E:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Additional information

No response

druggedhippo commented 1 year ago

This is the problem described at https://jdhao.github.io/2022/01/28/pytorch_model_load_error/ : the checkpoint was saved on a CUDA device, so torch.load tries to deserialize its tensors back onto CUDA, which fails when torch.cuda.is_available() is False.

The fix for this specific error is to pass map_location=torch.device('cpu') to the torch.load call in build_sam_hq.py:

def _load_sam_checkpoint(sam: Sam, checkpoint=None):
    sam.eval()
    if checkpoint is not None:
        with open(checkpoint, "rb") as f:
            # map_location forces all storages onto the CPU, avoiding the
            # CUDA deserialization error when torch.cuda.is_available() is False
            state_dict = torch.load(f, map_location=torch.device('cpu'))
        info = sam.load_state_dict(state_dict, strict=False)
        print(info)
    # freeze all parameters; the model is used for inference only
    for _, p in sam.named_parameters():
        p.requires_grad = False
    return sam

But if you are trying to run this on a non-NVIDIA card, you'll probably still run into other issues...
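As a minimal standalone sketch (not part of the extension's code), the effect of map_location can be seen by round-tripping a small state dict through torch.save/torch.load and forcing every storage onto the CPU:

```python
import io
import torch

# Save a small state dict to an in-memory buffer, then load it back
# with map_location="cpu". On a CUDA machine this remaps GPU tensors
# to the CPU; on a CPU-only machine it prevents the
# "Attempting to deserialize object on a CUDA device" RuntimeError.
buf = io.BytesIO()
torch.save({"w": torch.ones(2, 2)}, buf)
buf.seek(0)

state_dict = torch.load(buf, map_location=torch.device("cpu"))
print(state_dict["w"].device)  # prints: cpu
```

The same map_location argument also accepts a string ("cpu") or a dict/function remapping specific devices, so a single loader can serve both CPU-only and GPU installs.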