fenneishi / Fooocus-ControlNet-SDXL

add more control to fooocus

Model file is always corrupted and re-downloaded #22

Open sakibulalam opened 6 months ago

sakibulalam commented 6 months ago

Describe the problem
When using the Depth Image Prompt, the model file ZoeD_M12_N.pt is always flagged as corrupted and gets re-downloaded. I compared the SHA-256 checksum of the downloaded file with the one on Hugging Face and it matches:

c97f94c4d53c5b788af46c5da0462262aebb37ea116fd70014bcbba93146c33b  ZoeD_M12_N.pt
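
(A minimal Python way to reproduce that hash, assuming the file sits under models/controlnet/ in the repo checkout:)

```python
import hashlib

# Hash the downloaded checkpoint in 1 MiB chunks and compare the digest with
# the SHA-256 value published on the Hugging Face model page.
h = hashlib.sha256()
with open('models/controlnet/ZoeD_M12_N.pt', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):
        h.update(chunk)
print(h.hexdigest())
# expected: c97f94c4d53c5b788af46c5da0462262aebb37ea116fd70014bcbba93146c33b
```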

Full Console Log

WARNING: Running on CPU. This will be slow. Check your CUDA installation.
img_size [384, 512]
/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:3527.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Params passed to Resize transform:
    width:  512
    height:  384
    resize_target:  True
    keep_aspect_ratio:  True
    ensure_multiple_of:  32
    resize_method:  minimal
Traceback (most recent call last):
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/patch.py", line 497, in loader
    result = original_loader(*args, **kwargs)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load
    return _load(opened_zipfile,
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load
    result = unpickler.load()
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1392, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1366, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 381, in default_restore_location
    result = fn(storage, location)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 274, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 258, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/async_worker.py", line 704, in worker
    handler(task)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/async_worker.py", line 250, in handler
    pipeline.refresh_controlnets(
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/default_pipeline.py", line 69, in refresh_controlnets
    cache_controlnet_preprocess = {get_1st_path(get_paths(ms)): cache_loader(l, ms) for l, ms in preprocess_loaders.items()}
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/default_pipeline.py", line 69, in <dictcomp>
    cache_controlnet_preprocess = {get_1st_path(get_paths(ms)): cache_loader(l, ms) for l, ms in preprocess_loaders.items()}
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/default_pipeline.py", line 57, in cache_loader
    return loader(path_1st if 1 == len(paths) else paths) if not path_1st in loaded_ControlNets else \
  File "/Users/testing/Fooocus-ControlNet-SDXL/fooocus_extras/controlnet_preprocess_model/ZeoDepth/__init__.py", line 26, in __init__
    model.load_state_dict(torch.load(model_path)['model'])
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/patch.py", line 513, in loader
    raise ValueError(exp)
ValueError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
File corrupted: /Users/testing/Fooocus-ControlNet-SDXL/models/controlnet/ZoeD_M12_N.pt
Fooocus has tried to move the corrupted file to /Users/testing/Fooocus-ControlNet-SDXL/models/controlnet/ZoeD_M12_N.pt.corrupted
You may try again now and Fooocus will download models again
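
From the traceback, the file does not actually appear to be corrupted: torch.load fails because the ZoeDepth checkpoint stores CUDA tensors and torch.cuda.is_available() is False on this machine, and the patched loader in modules/patch.py then treats that RuntimeError as corruption, moves the file to .corrupted, and triggers a re-download. A minimal sketch of the kind of fix the error message itself suggests (passing map_location where the checkpoint is loaded in fooocus_extras/controlnet_preprocess_model/ZeoDepth/__init__.py; untested, names assumed from the traceback) would be:

```python
import torch

def load_zoedepth_state(model_path: str):
    """Load ZoeD_M12_N.pt on whatever device is actually available.

    map_location remaps the CUDA-saved tensors onto the CPU when no CUDA
    device is present, which is exactly what the RuntimeError above asks for.
    """
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    checkpoint = torch.load(model_path, map_location=device)
    return checkpoint['model']
```

With something like that in place, the loader would no longer raise on a CPU-only machine, so the (valid) model file would not be flagged as corrupted and re-downloaded.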
alextoddslick commented 5 months ago

Have you found a solution for this?