invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: RuntimeError: please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. #3442

Closed websepia closed 1 year ago

websepia commented 1 year ago

Is there an existing issue for this?

OS

macOS

GPU

cpu

VRAM

2GB

What version did you experience this issue on?

v2.3.5.post1

What happened?

This issue occurred in v2.3.5.post1, but it did not occur in InvokeAI v2.3.5.

Steps to reproduce:

  1. invokeai --web

Screenshots

[Screenshot 2023-05-21 02:41:07]

Additional context

git pull
warning: unknown value given to http.version: 'http/1.1'
remote: Enumerating objects: 849, done.
remote: Counting objects: 100% (847/847), done.
remote: Compressing objects: 100% (325/325), done.
remote: Total 849 (delta 556), reused 782 (delta 519), pack-reused 2
Receiving objects: 100% (849/849), 444.87 KiB | 636.00 KiB/s, done.
Resolving deltas: 100% (556/556), done, 48 local objects processed.
From https://github.com/invoke-ai/InvokeAI
   84b801d8..ff0e79fa  main                   -> origin/main
   9ecca132..efabf250  Convert-Model-Endpoint -> origin/Convert-Model-Endpoint
   fea9a6bf..47b0d5a9  feat/controlnet-nodes  -> origin/feat/controlnet-nodes

Contact Details

No response

websepia commented 1 year ago

I reverted to 84b801d8, but this error still occurs!

Steps:

  1. git reflog
     ff0e79fa (HEAD -> main, origin/main, origin/HEAD) HEAD@{21}: pull: Fast-forward
     84b801d8 HEAD@{22}: clone: from https://github.com/invoke-ai/InvokeAI

  2. git checkout main

  3. git reset --hard 84b801d8

websepia commented 1 year ago

invokeai.init has a parameter --always_use_cpu, but it does not seem to work in my case.

My invokeai.init file:

# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
# Place frequently-used startup commands here, one or more per line.
# Examples:
# --outdir=D:\data\images
# --no-nsfw_checker
# --web --host=0.0.0.0
# --steps=20
# -Ak_euler_a -C10.0
--outdir="/Users/xixili/AI/InvokeAI/outputs" --embedding_path="/Users/xixili/AI/InvokeAI/embeddings" --precision=auto --max_loaded_models=2 --no-nsfw_checker --xformers --ckpt_convert
--always_use_cpu --autoconvert "/Users/xixili/AI/stable-diffusion-webui/models"

websepia commented 1 year ago

As a workaround, I changed the default map_location to 'cpu' in /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py, as shown below.


def load(
    f: FILE_LIKE,
    map_location: MAP_LOCATION = 'cpu',  # default changed from None to 'cpu'
    pickle_module: Any = None,
    *,
    weights_only: bool = False,
    **pickle_load_args: Any
) -> Any:
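A minimal sketch of a less invasive alternative: rather than editing torch/serialization.py inside site-packages (which gets overwritten on every reinstall), the loader can be wrapped at startup so every call defaults to CPU mapping. Here `load` is a stand-in for torch.load, used only so the wrapping pattern can be shown without a real checkpoint file.

```python
import functools

def load(f, map_location=None, **kwargs):
    # Stand-in for torch.load: records how it was called instead of
    # deserializing a checkpoint.
    return {"file": f, "map_location": map_location}

_original_load = load

@functools.wraps(_original_load)
def _cpu_load(*args, **kwargs):
    # Force CPU mapping unless the caller explicitly requested another device.
    kwargs.setdefault("map_location", "cpu")
    return _original_load(*args, **kwargs)

load = _cpu_load
```

With the wrapper installed, a plain `load("model.ckpt")` is mapped to the CPU, while an explicit `load("model.ckpt", map_location="mps")` is left untouched.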
websepia commented 1 year ago

Another workaround is to add the following code snippet at line 786 of model_manager: /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py


            if Globals.always_use_cpu is True:
                checkpoint = torch.load(model_path, map_location=torch.device('cpu'))
            else:
                checkpoint = torch.load(model_path)
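The snippet above dispatches on Globals.always_use_cpu. A minimal sketch of that dispatch, with stand-ins for Globals and torch.load so the logic can be exercised without InvokeAI installed or a checkpoint on disk:

```python
from types import SimpleNamespace

# Stand-in for ldm.invoke.globals.Globals; in InvokeAI this attribute is set
# from the --always_use_cpu flag in invokeai.init.
Globals = SimpleNamespace(always_use_cpu=True)

def fake_torch_load(path, map_location=None):
    # Stand-in for torch.load: records the arguments instead of deserializing.
    return {"path": path, "map_location": map_location}

def load_checkpoint(model_path):
    # Mirrors the conditional added to model_manager.py: map storages to the
    # CPU whenever the user has requested CPU-only operation.
    if Globals.always_use_cpu:
        return fake_torch_load(model_path, map_location="cpu")
    return fake_torch_load(model_path)
```

When always_use_cpu is set, load_checkpoint passes map_location="cpu"; otherwise it defers to torch.load's default device resolution.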