invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: Docker build doesn't work properly #6468

Open SergKlein opened 5 months ago

SergKlein commented 5 months ago

Is there an existing issue for this problem?

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

RTX4090

GPU VRAM

No response

Version number

4.2.2post1

Browser

Chrome

Python dependencies

No response

What happened

=> ERROR [builder 6/8] COPY invokeai ./invokeai                           0.0s
 => ERROR [builder 7/8] COPY pyproject.toml ./                             0.0s
 => CACHED [builder 8/8] RUN --mount=type=cache,target=/root/.cache/pip    0.0s
 => CACHED [runtime  3/11] COPY --link --from=builder /opt/invokeai /opt/  0.0s
 => CACHED [runtime  4/11] COPY --link --from=builder /opt/venv/invokeai   0.0s
 => CACHED [web-builder 2/6] RUN corepack enable                           0.0s
 => CACHED [web-builder 3/6] WORKDIR /build                                0.0s
 => CACHED [web-builder 4/6] COPY invokeai/frontend/web/ ./                0.0s
 => CACHED [web-builder 5/6] RUN --mount=type=cache,target=/pnpm/store     0.0s
 => CACHED [web-builder 6/6] RUN npx vite build                            0.0s
 => CACHED [runtime  5/11] COPY --link --from=web-builder /build/dist /op  0.0s
 => CACHED [runtime  6/11] RUN mkdir -p "/opt/amdgpu/share/libdrm" &&  ln  0.0s
 => CACHED [runtime  7/11] WORKDIR /opt/invokeai                           0.0s
 => CACHED [runtime  8/11] RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfi  0.0s
 => CACHED [runtime  9/11] RUN python3 -c "from patchmatch import patch_m  0.0s
 => CACHED [runtime 10/11] RUN mkdir -p /invokeai && chown -R 1000:1000 /  0.0s
 => ERROR [runtime 11/11] COPY docker/docker-entrypoint.sh ./              0.0s
------
 > [builder 6/8] COPY invokeai ./invokeai:
------
------
 > [builder 7/8] COPY pyproject.toml ./:
------
------
 > [runtime 11/11] COPY docker/docker-entrypoint.sh ./:
------
Dockerfile:129
--------------------
 127 |     RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
 128 |     
 129 | >>> COPY docker/docker-entrypoint.sh ./
 130 |     ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
 131 |     CMD ["invokeai-web"]
--------------------
ERROR: failed to solve: failed to compute cache key: failed to calculate checksum of ref f82caab4-0a61-4e29-b021-62998b74903b::x88yjraig1bau0zj2l4zopwjz: "/docker/docker-entrypoint.sh": not found
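
A `COPY ... not found` failure like the one above usually means the file is absent from the build context, either because the build was started from the wrong directory or because a `.dockerignore` rule excludes it. A hypothetical diagnostic, run from the repository root (the file names are taken from the failing `COPY` instructions above):

```shell
#!/bin/sh
# Check that the paths referenced by the failing COPY instructions
# actually exist in the directory used as the build context.
for f in docker/docker-entrypoint.sh pyproject.toml invokeai; do
    if [ -e "$f" ]; then
        echo "found:   $f"
    else
        echo "MISSING: $f"
    fi
done
# Also worth inspecting: any .dockerignore patterns matching "docker"
# (this grep is a suggestion, not from the original report)
grep -n docker .dockerignore 2>/dev/null || true
```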

What you expected to happen

A ready-to-use image.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

ebr commented 5 months ago

Unable to reproduce on either 4.2.2post1 or the main image.

Please try again and post the complete output of your terminal.

SergKlein commented 5 months ago
 docker % docker compose up
[+] Running 1/1
 ! invokeai-nvidia Warning                                                 1.6s 
[+] Building 40.2s (24/33)                                 docker:desktop-linux
 => [invokeai-nvidia internal] load .dockerignore                          0.0s
 => => transferring context: 151B                                          0.0s
 => [invokeai-nvidia internal] load build definition from Dockerfile       0.0s
 => => transferring dockerfile: 4.08kB                                     0.0s
 => [invokeai-nvidia] resolve image config for docker.io/docker/dockerfil  1.5s
 => [invokeai-nvidia auth] docker/dockerfile:pull token for registry-1.do  0.0s
 => CACHED [invokeai-nvidia] docker-image://docker.io/docker/dockerfile:1  0.0s
 => [invokeai-nvidia internal] load metadata for docker.io/library/ubuntu  1.1s
 => [invokeai-nvidia internal] load metadata for docker.io/library/node:2  1.3s
 => [invokeai-nvidia auth] library/ubuntu:pull token for registry-1.docke  0.0s
 => [invokeai-nvidia auth] library/node:pull token for registry-1.docker.  0.0s
 => [invokeai-nvidia internal] load build context                          0.0s
 => => transferring context: 150.99kB                                      0.0s
 => [invokeai-nvidia builder 1/7] FROM docker.io/library/ubuntu:23.04@sha  0.0s
 => [invokeai-nvidia web-builder 1/6] FROM docker.io/library/node:20-slim  0.0s
 => CACHED [invokeai-nvidia runtime  2/11] RUN apt update && apt install   0.0s
 => CACHED [invokeai-nvidia web-builder 2/6] RUN corepack enable           0.0s
 => CACHED [invokeai-nvidia web-builder 3/6] WORKDIR /build                0.0s
 => CACHED [invokeai-nvidia web-builder 4/6] COPY invokeai/frontend/web/   0.0s
 => CACHED [invokeai-nvidia web-builder 5/6] RUN --mount=type=cache,targe  0.0s
 => CACHED [invokeai-nvidia web-builder 6/6] RUN npx vite build            0.0s
 => CACHED [invokeai-nvidia builder 2/7] RUN rm -f /etc/apt/apt.conf.d/do  0.0s
 => CACHED [invokeai-nvidia builder 3/7] RUN --mount=type=cache,target=/v  0.0s
 => CACHED [invokeai-nvidia builder 4/7] WORKDIR /opt/invokeai             0.0s
 => CACHED [invokeai-nvidia builder 5/7] COPY invokeai ./invokeai          0.0s
 => CACHED [invokeai-nvidia builder 6/7] COPY pyproject.toml ./            0.0s
 => ERROR [invokeai-nvidia builder 7/7] RUN --mount=type=cache,target=/r  37.2s
------                                                                          
 > [invokeai-nvidia builder 7/7] RUN --mount=type=cache,target=/root/.cache/pip     python3 -m venv /opt/venv/invokeai &&    if [ "linux/amd64" = "linux/arm64" ] || [ "nvidia" = "cpu" ]; then         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu";     elif [ "nvidia" = "rocm" ]; then         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6";     else         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121";     fi &&    if [ "nvidia" = "cuda" ] && [ "linux/amd64" = "linux/amd64" ]; then         pip install $extra_index_url_arg -e ".[xformers]";     else         pip install $extra_index_url_arg -e ".";     fi:
1.571 Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
1.571 Obtaining file:///opt/invokeai
1.578   Installing build dependencies: started
4.460   Installing build dependencies: finished with status 'done'
4.461   Checking if build backend supports build_editable: started
4.542   Checking if build backend supports build_editable: finished with status 'done'
4.543   Getting requirements to build editable: started
4.697   Getting requirements to build editable: finished with status 'done'
4.698   Preparing editable metadata (pyproject.toml): started
4.852   Preparing editable metadata (pyproject.toml): finished with status 'done'
5.485 Collecting accelerate==0.30.1
5.489   Using cached accelerate-0.30.1-py3-none-any.whl (302 kB)
5.916 Collecting clip-anytorch==2.6.0
5.924   Using cached clip_anytorch-2.6.0-py3-none-any.whl (1.4 MB)
6.374 Collecting compel==2.0.2
6.377   Using cached compel-2.0.2-py3-none-any.whl (30 kB)
6.862 Collecting controlnet-aux==0.0.7
6.865   Using cached controlnet_aux-0.0.7.tar.gz (202 kB)
6.908   Preparing metadata (setup.py): started
7.008   Preparing metadata (setup.py): finished with status 'done'
7.448 Collecting diffusers[torch]==0.27.2
7.459   Using cached diffusers-0.27.2-py3-none-any.whl (2.0 MB)
7.928 Collecting invisible-watermark==0.2.0
7.939   Using cached invisible_watermark-0.2.0-py3-none-any.whl (1.6 MB)
8.411 Collecting mediapipe==0.10.7
8.458   Using cached mediapipe-0.10.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (32.5 MB)
9.229 Collecting numpy==1.26.4
9.246   Using cached numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
9.758 Collecting onnx==1.15.0
9.770   Using cached onnx-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (15.6 MB)
10.35 Collecting onnxruntime==1.16.3
10.36   Using cached onnxruntime-1.16.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (5.8 MB)
10.95 Collecting opencv-python==4.9.0.80
11.04   Using cached opencv_python-4.9.0.80-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (41.3 MB)
11.67 Collecting pytorch-lightning==2.1.3
11.68   Using cached pytorch_lightning-2.1.3-py3-none-any.whl (777 kB)
12.28 Collecting safetensors==0.4.3
12.29   Using cached safetensors-0.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.2 MB)
12.81 Collecting timm==0.6.13
12.82   Using cached timm-0.6.13-py3-none-any.whl (549 kB)
13.49 Collecting torch==2.2.2
13.62   Using cached torch-2.2.2-cp311-cp311-manylinux2014_aarch64.whl (86.6 MB)
14.24 Collecting torchmetrics==0.11.4
14.24   Using cached torchmetrics-0.11.4-py3-none-any.whl (519 kB)
14.72 Collecting torchsde==0.2.6
14.73   Using cached torchsde-0.2.6-py3-none-any.whl (61 kB)
15.41 Collecting torchvision==0.17.2
15.42   Using cached torchvision-0.17.2-cp311-cp311-manylinux2014_aarch64.whl (14.0 MB)
15.95 Collecting transformers==4.41.1
15.96   Using cached transformers-4.41.1-py3-none-any.whl (9.1 MB)
16.50 Collecting fastapi-events==0.11.0
16.51   Using cached fastapi_events-0.11.0-py3-none-any.whl (28 kB)
17.07 Collecting fastapi==0.111.0
17.07   Using cached fastapi-0.111.0-py3-none-any.whl (91 kB)
17.58 Collecting huggingface-hub==0.23.1
17.59   Using cached huggingface_hub-0.23.1-py3-none-any.whl (401 kB)
18.11 Collecting pydantic-settings==2.2.1
18.11   Using cached pydantic_settings-2.2.1-py3-none-any.whl (13 kB)
18.72 Collecting pydantic==2.7.2
18.72   Using cached pydantic-2.7.2-py3-none-any.whl (409 kB)
19.20 Collecting python-socketio==5.11.1
19.20   Using cached python_socketio-5.11.1-py3-none-any.whl (75 kB)
19.78 Collecting uvicorn[standard]==0.28.0
19.78   Using cached uvicorn-0.28.0-py3-none-any.whl (60 kB)
20.32 Collecting albumentations
20.32   Using cached albumentations-1.4.8-py3-none-any.whl (156 kB)
20.87 Collecting blake3
20.87   Using cached blake3-0.4.1.tar.gz (117 kB)
20.89   Installing build dependencies: started
37.06   Installing build dependencies: finished with status 'done'
37.06   Getting requirements to build wheel: started
37.08   Getting requirements to build wheel: finished with status 'done'
37.08   Preparing metadata (pyproject.toml): started
37.10   Preparing metadata (pyproject.toml): finished with status 'error'
37.10   error: subprocess-exited-with-error
37.10   
37.10   × Preparing metadata (pyproject.toml) did not run successfully.
37.10   │ exit code: 1
37.10   ╰─> [6 lines of output]
37.10       
37.10       Cargo, the Rust package manager, is not installed or is not on PATH.
37.10       This package requires Rust and Cargo to compile extensions. Install it through
37.10       the system's package manager or via https://rustup.rs/
37.10       
37.10       Checking for Rust toolchain....
37.10       [end of output]
37.10   
37.10   note: This error originates from a subprocess, and is likely not a problem with pip.
37.10 error: metadata-generation-failed
37.10 
37.10 × Encountered error while generating package metadata.
37.10 ╰─> See above for output.
37.10 
37.10 note: This is an issue with the package mentioned above, not pip.
37.10 hint: See above for details.
------
failed to solve: process "/bin/sh -c python3 -m venv ${VIRTUAL_ENV} &&    if [ \"$TARGETPLATFORM\" = \"linux/arm64\" ] || [ \"$GPU_DRIVER\" = \"cpu\" ]; then         extra_index_url_arg=\"--extra-index-url https://download.pytorch.org/whl/cpu\";     elif [ \"$GPU_DRIVER\" = \"rocm\" ]; then         extra_index_url_arg=\"--extra-index-url https://download.pytorch.org/whl/rocm5.6\";     else         extra_index_url_arg=\"--extra-index-url https://download.pytorch.org/whl/cu121\";     fi &&    if [ \"$GPU_DRIVER\" = \"cuda\" ] && [ \"$TARGETPLATFORM\" = \"linux/amd64\" ]; then         pip install $extra_index_url_arg -e \".[xformers]\";     else         pip install $extra_index_url_arg -e \".\";     fi" did not complete successfully: exit code: 1
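
The underlying failure in this log is different from the first one: pip fetched only a source tarball for `blake3` (note `Using cached blake3-0.4.1.tar.gz`, not a wheel) and the builder image has no Rust toolchain to compile it. One possible workaround, sketched against the `ubuntu:23.04` builder stage visible in the log — the stage name and package list here are assumptions, not the project's actual Dockerfile:

```dockerfile
# Sketch only: give the builder stage a Rust toolchain so packages
# without a prebuilt wheel for this platform (here: blake3) can
# compile from source during pip install.
FROM ubuntu:23.04 AS builder
RUN apt update && apt install -y --no-install-recommends \
        build-essential curl ca-certificates
# rustup's official installer; --profile minimal keeps the layer small
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | \
        sh -s -- -y --profile minimal
ENV PATH="/root/.cargo/bin:${PATH}"
```

If a `blake3` release that ships a prebuilt wheel for this platform exists, pinning to it would avoid the extra toolchain entirely.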
ebr commented 5 months ago

Could you please try `docker compose build --no-cache`?

SergKlein commented 5 months ago

This is the latest version, and this is the error after running the command you proposed:

docker compose build --no-cache
[+] Building 121.9s (24/33)                                docker:desktop-linux
 => [invokeai-nvidia internal] load .dockerignore                          0.0s
 => => transferring context: 151B                                          0.0s
 => [invokeai-nvidia internal] load build definition from Dockerfile       0.0s
 => => transferring dockerfile: 4.08kB                                     0.0s
 => [invokeai-nvidia] resolve image config for docker.io/docker/dockerfil  1.6s
 => [invokeai-nvidia auth] docker/dockerfile:pull token for registry-1.do  0.0s
 => CACHED [invokeai-nvidia] docker-image://docker.io/docker/dockerfile:1  0.0s
 => [invokeai-nvidia internal] load metadata for docker.io/library/ubuntu  1.3s
 => [invokeai-nvidia internal] load metadata for docker.io/library/node:2  1.1s
 => [invokeai-nvidia auth] library/node:pull token for registry-1.docker.  0.0s
 => [invokeai-nvidia auth] library/ubuntu:pull token for registry-1.docke  0.0s
 => CACHED [invokeai-nvidia runtime  1/11] FROM docker.io/library/ubuntu:  0.0s
 => [invokeai-nvidia internal] load build context                          0.1s
 => => transferring context: 150.99kB                                      0.1s
 => CACHED [invokeai-nvidia web-builder 1/6] FROM docker.io/library/node:  0.0s
 => [invokeai-nvidia builder 2/7] RUN rm -f /etc/apt/apt.conf.d/docker-cl  0.3s
 => [invokeai-nvidia web-builder 2/6] RUN corepack enable                  0.3s
 => [invokeai-nvidia runtime  2/11] RUN apt update && apt install -y --n  80.9s
 => [invokeai-nvidia builder 3/7] RUN --mount=type=cache,target=/var/cac  56.8s
 => [invokeai-nvidia web-builder 3/6] WORKDIR /build                       0.0s
 => [invokeai-nvidia web-builder 4/6] COPY invokeai/frontend/web/ ./       0.2s
 => [invokeai-nvidia web-builder 5/6] RUN --mount=type=cache,target=/pnp  29.2s
 => [invokeai-nvidia web-builder 6/6] RUN npx vite build                  16.1s
 => [invokeai-nvidia builder 4/7] WORKDIR /opt/invokeai                    0.0s
 => [invokeai-nvidia builder 5/7] COPY invokeai ./invokeai                 0.1s
 => [invokeai-nvidia builder 6/7] COPY pyproject.toml ./                   0.0s
 => ERROR [invokeai-nvidia builder 7/7] RUN --mount=type=cache,target=/r  60.9s
------                                                                          
 > [invokeai-nvidia builder 7/7] RUN --mount=type=cache,target=/root/.cache/pip     python3 -m venv /opt/venv/invokeai &&    if [ "linux/amd64" = "linux/arm64" ] || [ "nvidia" = "cpu" ]; then         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu";     elif [ "nvidia" = "rocm" ]; then         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6";     else         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121";     fi &&    if [ "nvidia" = "cuda" ] && [ "linux/amd64" = "linux/amd64" ]; then         pip install $extra_index_url_arg -e ".[xformers]";     else         pip install $extra_index_url_arg -e ".";     fi:
1.675 Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
1.675 Obtaining file:///opt/invokeai
1.683   Installing build dependencies: started
5.376   Installing build dependencies: finished with status 'done'
5.378   Checking if build backend supports build_editable: started
5.470   Checking if build backend supports build_editable: finished with status 'done'
5.470   Getting requirements to build editable: started
5.649   Getting requirements to build editable: finished with status 'done'
5.650   Preparing editable metadata (pyproject.toml): started
5.835   Preparing editable metadata (pyproject.toml): finished with status 'done'
6.528 Collecting accelerate==0.30.1
6.776   Downloading accelerate-0.30.1-py3-none-any.whl (302 kB)
6.989      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.6/302.6 kB 1.4 MB/s eta 0:00:00
7.654 Collecting clip-anytorch==2.6.0
7.677   Downloading clip_anytorch-2.6.0-py3-none-any.whl (1.4 MB)
8.339      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 2.1 MB/s eta 0:00:00
8.880 Collecting compel==2.0.2
8.903   Downloading compel-2.0.2-py3-none-any.whl (30 kB)
9.505 Collecting controlnet-aux==0.0.7
9.529   Downloading controlnet_aux-0.0.7.tar.gz (202 kB)
9.613      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 202.4/202.4 kB 2.4 MB/s eta 0:00:00
9.633   Preparing metadata (setup.py): started
9.735   Preparing metadata (setup.py): finished with status 'done'
10.32 Collecting diffusers[torch]==0.27.2
10.34   Downloading diffusers-0.27.2-py3-none-any.whl (2.0 MB)
11.07      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 2.8 MB/s eta 0:00:00
11.64 Collecting invisible-watermark==0.2.0
11.67   Downloading invisible_watermark-0.2.0-py3-none-any.whl (1.6 MB)
12.18      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 3.1 MB/s eta 0:00:00
12.65 Collecting mediapipe==0.10.7
12.87   Downloading mediapipe-0.10.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (32.5 MB)
23.55      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 32.5/32.5 MB 4.2 MB/s eta 0:00:00
24.23 Collecting numpy==1.26.4
24.28   Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
26.18      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 7.5 MB/s eta 0:00:00
26.69 Collecting onnx==1.15.0
26.81   Downloading onnx-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (15.6 MB)
30.20      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.6/15.6 MB 4.2 MB/s eta 0:00:00
30.81 Collecting onnxruntime==1.16.3
30.93   Downloading onnxruntime-1.16.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (5.8 MB)
31.94      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 5.7 MB/s eta 0:00:00
32.51 Collecting opencv-python==4.9.0.80
32.55   Downloading opencv_python-4.9.0.80-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (41.3 MB)
37.25      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.3/41.3 MB 9.4 MB/s eta 0:00:00
37.79 Collecting pytorch-lightning==2.1.3
37.83   Downloading pytorch_lightning-2.1.3-py3-none-any.whl (777 kB)
37.90      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 777.7/777.7 kB 10.5 MB/s eta 0:00:00
38.50 Collecting safetensors==0.4.3
38.53   Downloading safetensors-0.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.2 MB)
38.65      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 10.3 MB/s eta 0:00:00
39.16 Collecting timm==0.6.13
39.18   Downloading timm-0.6.13-py3-none-any.whl (549 kB)
39.24      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 549.1/549.1 kB 9.9 MB/s eta 0:00:00
39.98 Collecting torch==2.2.2
40.00   Downloading torch-2.2.2-cp311-cp311-manylinux2014_aarch64.whl (86.6 MB)
47.48      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.6/86.6 MB 10.4 MB/s eta 0:00:00
48.11 Collecting torchmetrics==0.11.4
48.14   Downloading torchmetrics-0.11.4-py3-none-any.whl (519 kB)
48.17      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 519.2/519.2 kB 23.7 MB/s eta 0:00:00
48.73 Collecting torchsde==0.2.6
48.76   Downloading torchsde-0.2.6-py3-none-any.whl (61 kB)
48.76      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 9.6 MB/s eta 0:00:00
49.48 Collecting torchvision==0.17.2
49.50   Downloading torchvision-0.17.2-cp311-cp311-manylinux2014_aarch64.whl (14.0 MB)
50.98      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 13.1 MB/s eta 0:00:00
51.60 Collecting transformers==4.41.1
51.63   Downloading transformers-4.41.1-py3-none-any.whl (9.1 MB)
52.31      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.1/9.1 MB 13.5 MB/s eta 0:00:00
52.90 Collecting fastapi-events==0.11.0
52.93   Downloading fastapi_events-0.11.0-py3-none-any.whl (28 kB)
53.48 Collecting fastapi==0.111.0
53.51   Downloading fastapi-0.111.0-py3-none-any.whl (91 kB)
53.52      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 92.0/92.0 kB 22.5 MB/s eta 0:00:00
54.02 Collecting huggingface-hub==0.23.1
54.10   Downloading huggingface_hub-0.23.1-py3-none-any.whl (401 kB)
54.13      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 401.3/401.3 kB 17.1 MB/s eta 0:00:00
54.67 Collecting pydantic-settings==2.2.1
54.69   Downloading pydantic_settings-2.2.1-py3-none-any.whl (13 kB)
55.39 Collecting pydantic==2.7.2
55.42   Downloading pydantic-2.7.2-py3-none-any.whl (409 kB)
55.47      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 409.5/409.5 kB 9.7 MB/s eta 0:00:00
56.01 Collecting python-socketio==5.11.1
56.05   Downloading python_socketio-5.11.1-py3-none-any.whl (75 kB)
56.06      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.5/75.5 kB 10.9 MB/s eta 0:00:00
56.61 Collecting uvicorn[standard]==0.28.0
56.63   Downloading uvicorn-0.28.0-py3-none-any.whl (60 kB)
56.64      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.6/60.6 kB 30.6 MB/s eta 0:00:00
57.23 Collecting albumentations
57.26   Downloading albumentations-1.4.8-py3-none-any.whl (156 kB)
57.27      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.8/156.8 kB 25.2 MB/s eta 0:00:00
57.87 Collecting blake3
57.90   Downloading blake3-0.4.1.tar.gz (117 kB)
57.91      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 117.7/117.7 kB 19.8 MB/s eta 0:00:00
57.93   Installing build dependencies: started
60.51   Installing build dependencies: finished with status 'done'
60.51   Getting requirements to build wheel: started
60.53   Getting requirements to build wheel: finished with status 'done'
60.53   Preparing metadata (pyproject.toml): started
60.55   Preparing metadata (pyproject.toml): finished with status 'error'
60.56   error: subprocess-exited-with-error
60.56   
60.56   × Preparing metadata (pyproject.toml) did not run successfully.
60.56   │ exit code: 1
60.56   ╰─> [6 lines of output]
60.56       
60.56       Cargo, the Rust package manager, is not installed or is not on PATH.
60.56       This package requires Rust and Cargo to compile extensions. Install it through
60.56       the system's package manager or via https://rustup.rs/
60.56       
60.56       Checking for Rust toolchain....
60.56       [end of output]
60.56   
60.56   note: This error originates from a subprocess, and is likely not a problem with pip.
60.56 error: metadata-generation-failed
60.56 
60.56 × Encountered error while generating package metadata.
60.56 ╰─> See above for output.
60.56 
60.56 note: This is an issue with the package mentioned above, not pip.
60.56 hint: See above for details.
------
failed to solve: process "/bin/sh -c python3 -m venv ${VIRTUAL_ENV} &&    if [ \"$TARGETPLATFORM\" = \"linux/arm64\" ] || [ \"$GPU_DRIVER\" = \"cpu\" ]; then         extra_index_url_arg=\"--extra-index-url https://download.pytorch.org/whl/cpu\";     elif [ \"$GPU_DRIVER\" = \"rocm\" ]; then         extra_index_url_arg=\"--extra-index-url https://download.pytorch.org/whl/rocm5.6\";     else         extra_index_url_arg=\"--extra-index-url https://download.pytorch.org/whl/cu121\";     fi &&    if [ \"$GPU_DRIVER\" = \"cuda\" ] && [ \"$TARGETPLATFORM\" = \"linux/amd64\" ]; then         pip install $extra_index_url_arg -e \".[xformers]\";     else         pip install $extra_index_url_arg -e \".\";     fi" did not complete successfully: exit code: 1
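
One detail worth noting from the logs above: every wheel pip selects is an `aarch64` build (e.g. `torch-2.2.2-cp311-cp311-manylinux2014_aarch64.whl`), while the expanded `RUN` command compares against `linux/amd64`. That suggests the build is actually targeting `linux/arm64` — consistent with `docker:desktop-linux`, e.g. Docker Desktop on Apple Silicon — so `blake3` finds no matching wheel and falls back to a source build, which then fails for lack of Cargo. A hypothetical check (the forced-amd64 line assumes emulation is available on the host):

```shell
#!/bin/sh
# Report the host architecture; on an arm64 host, pip inside the build
# will prefer aarch64 wheels and build from source when none exists.
uname -m
# Ask the Docker daemon what architecture it reports (best effort).
docker version --format '{{.Server.Arch}}' 2>/dev/null || \
    echo "docker daemon not reachable"
# Possible workaround: force an amd64 build under emulation.
# DOCKER_DEFAULT_PLATFORM=linux/amd64 docker compose build --no-cache
```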
ebr commented 5 months ago

We were still unable to reproduce this on any of our systems. So far this doesn't look like an "us" problem. I'll try on a fresh Ubuntu install and report back.