Hi!
If needed, you can download the files yourself and place them in the proper places within the "data" directory structure.
It seems that the "download" service uses aria2c. There is a good section in the manual about proxy configuration for this software: https://aria2.github.io/manual/en/html/aria2c.html#environment
It is worth trying to pass an environment variable with the proxy configuration to the download service, so that aria2c uses the defined proxy.
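For reference, aria2c reads the all_proxy environment variable on its own (per the manual section above); a quick host-side sanity check, where the proxy address and URL are placeholders:

    # assumes aria2c is installed on the host; proxy and URL are placeholders
    all_proxy=http://123.45.67.89:1011 aria2c https://example.com/file.bin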
My approach here is to create a docker-compose.override.yml file (if you haven't already created one) and inject the proxy settings into the container with the following contents:
services:
  download:
    environment:
      - all_proxy=123.45.67.89:1011
where 123.45.67.89:1011 is the IP:PORT combination of your proxy.
Your case is specific, because you've got your proxy on localhost. It might not work at all: inside a container, 127.0.0.1 means the container's own loopback, not your host's IP.
First make sure that the proxy listens on port 1080 on all interfaces (not only 127.0.0.1). If so, you can expose your local machine's IP to the container with a specific entry (the extra_hosts section), and in your particular case docker-compose.override.yml should look like this:
services:
  download:
    environment:
      - all_proxy=host.docker.internal:1080
    extra_hosts:
      host.docker.internal: host-gateway
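Compose merges docker-compose.override.yml into docker-compose.yml automatically, so the usual command picks it up without extra flags:

    docker compose --profile download up --build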
If your proxy software can only listen on localhost, then it is a no-go, and without some firewall black magic it is better to use another proxy server, or find other software that can listen on all interfaces (or at least on the docker interface).
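One way to check which address the proxy is actually bound to (assuming ss from iproute2 is available on the host):

    ss -tlnp | grep 1080
    # 127.0.0.1:1080 -> loopback only; 0.0.0.0:1080 -> all interfaces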
Good luck :-)
@DevilaN thank you for the detailed answer.
I would also like to add an option:
If the proxy cannot listen on all interfaces, try updating the service so that it uses the host network, i.e. network_mode: host in your override file:
services:
  download:
    network_mode: "host"
This should make your proxy reachable from the container even if it is listening on localhost (in theory at least; I haven't tried it myself).
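A possible quick check (a hypothetical one-off container, not from this thread): with host networking, 127.0.0.1 inside the container is the host's loopback, so the proxy should answer:

    docker run --rm --network host curlimages/curl -sI -x http://127.0.0.1:1080 https://www.google.com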
Thank you for the detailed answer. I'm using a proxy program called Qv2ray, which is the GUI version of v2ray. I don't know if I can change the settings to listen on port 1080 on all interfaces. The default settings in Qv2ray are in the image below (my flameshot doesn't work, so I had no choice but to use my camera).
Thank you for your help. I created the docker-compose.override.yml as:
services:
  download:
    environment:
      - all_proxy=127.0.0.1:1080
    network_mode: "host"
but got this error:
[root@fedora stable-diffusion-webui-docker-master]# docker compose --profile download up --build
[+] Building 1.3s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 91B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/bash:alpine3.15 1.3s
------
> [internal] load metadata for docker.io/library/bash:alpine3.15:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch anonymous token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fbash%3Apull&service=registry.docker.io": proxyconnect tcp: dial tcp 127.0.0.1:8889: connect: connection refused
I see the listen address at the top. Can you change the address from 127.0.0.1 to 0.0.0.0?
You don't need to change the original docker-compose.yml file, only the override. You can also take out the network_mode and try it with the solution from @DevilaN.
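Putting both suggestions together, the override could be written from the shell like this (a sketch, assuming the proxy now listens on 0.0.0.0:1080):

    cat > docker-compose.override.yml <<'EOF'
    services:
      download:
        environment:
          - all_proxy=host.docker.internal:1080
        extra_hosts:
          host.docker.internal: host-gateway
    EOF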
It works for download, so I did the same for the other profiles. docker-compose.override.yml:
services:
  download:
    network_mode: "host"
    environment:
      - all_proxy=127.0.0.1:1080
  hlky:
    network_mode: "host"
    environment:
      - all_proxy=127.0.0.1:1080
  auto:
    network_mode: "host"
    environment:
      - all_proxy=127.0.0.1:1080
  auto-cpu:
    network_mode: "host"
    environment:
      - all_proxy=127.0.0.1:1080
  lstein:
    network_mode: "host"
    environment:
      - all_proxy=127.0.0.1:1080
but when it comes to docker compose --profile auto-cpu up --build, it fails:
[root@fedora stable-diffusion-webui-docker-master]# docker compose --profile auto-cpu up --build
[+] Building 129.1s (17/31)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 91B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 2.7s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> [internal] load .dockerignore 0.0s
=> [internal] load metadata for docker.io/library/python:3.10-slim 126.3s
=> [internal] load metadata for docker.io/alpine/git:2.36.2 1.9s
=> [internal] load build context 0.0s
=> => transferring context: 3.95kB 0.0s
=> [download 1/6] FROM docker.io/alpine/git:2.36.2@sha256:ec491c893597b68c92b88023827faa771772cfd5e106b76c713fa5e1c75dea84 0.0s
=> CACHED [xformers 1/3] FROM docker.io/library/python:3.10-slim@sha256:685b1c2ef40bd3ded77b3abd0965d5c16d19a20469be0ac06a3cf1d33f2e6d41 0.0s
=> CACHED [download 2/6] RUN git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion && cd repositories/stable-diffusion && git reset --hard 69ae4b35e0a0f6ee1af8bb9a5d00 0.0s
=> CACHED [download 3/6] RUN git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer && cd repositories/CodeFormer && git reset --hard c5b4593074ba6214284d6acd5f1719b6c5d739af 0.0s
=> CACHED [download 4/6] RUN git clone https://github.com/salesforce/BLIP.git repositories/BLIP && cd repositories/BLIP && git reset --hard 48211a1594f1321b00f14c9f7a5b4813144b2fb9 0.0s
=> CACHED [xformers 2/3] RUN pip install gdown 0.0s
=> ERROR [download 5/6] RUN <<EOF (# because taming-transformers is huge...) 123.7s
=> CANCELED [xformers 3/3] RUN gdown https://drive.google.com/uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw -O /wheel.whl 123.9s
=> CANCELED [stage-2 2/14] RUN pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 123.8s
------
> [download 5/6] RUN <<EOF (# because taming-transformers is huge...):
#0 0.292 + git config --global http.postBuffer 1048576000
#0 0.298 + git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
#0 0.299 Cloning into 'repositories/taming-transformers'...
#0 123.7 error: RPC failed; curl 16 Error in the HTTP2 framing layer
#0 123.7 fatal: expected flush after ref listing
------
failed to solve: executor failed running [/bin/sh -ceuxo pipefail # because taming-transformers is huge
git config --global http.postBuffer 1048576000
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
cd repositories/taming-transformers
git reset --hard 24268930bf1dce879235a7fddd0b2355b84d7ea6
rm -rf data assets
]: exit code: 128
A retry also fails:
[root@fedora stable-diffusion-webui-docker-master]# docker compose --profile auto-cpu up --build
[+] Building 134.5s (17/31)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 91B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 1.6s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s
=> [internal] load .dockerignore 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> [internal] load metadata for docker.io/alpine/git:2.36.2 1.3s
=> [internal] load metadata for docker.io/library/python:3.10-slim 132.7s
=> [download 1/6] FROM docker.io/alpine/git:2.36.2@sha256:ec491c893597b68c92b88023827faa771772cfd5e106b76c713fa5e1c75dea84 0.0s
=> CACHED [xformers 1/3] FROM docker.io/library/python:3.10-slim@sha256:685b1c2ef40bd3ded77b3abd0965d5c16d19a20469be0ac06a3cf1d33f2e6d41 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 354B 0.0s
=> CACHED [xformers 2/3] RUN pip install gdown 0.0s
=> CACHED [download 2/6] RUN git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion && cd repositories/stable-diffusion && git reset --hard 69ae4b35e0a0f6ee1af8bb9a5d00 0.0s
=> CACHED [download 3/6] RUN git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer && cd repositories/CodeFormer && git reset --hard c5b4593074ba6214284d6acd5f1719b6c5d739af 0.0s
=> CACHED [download 4/6] RUN git clone https://github.com/salesforce/BLIP.git repositories/BLIP && cd repositories/BLIP && git reset --hard 48211a1594f1321b00f14c9f7a5b4813144b2fb9 0.0s
=> ERROR [xformers 3/3] RUN gdown https://drive.google.com/uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw -O /wheel.whl 131.2s
=> CANCELED [download 5/6] RUN <<EOF (# because taming-transformers is huge...) 131.3s
=> CANCELED [stage-2 2/14] RUN pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 131.4s
------
> [xformers 3/3] RUN gdown https://drive.google.com/uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw -O /wheel.whl:
#0 0.234 + gdown 'https://drive.google.com/uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw' -O /wheel.whl
#0 131.2 Traceback (most recent call last):
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
#0 131.2 conn = connection.create_connection(
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
#0 131.2 raise err
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
#0 131.2 sock.connect(sa)
#0 131.2 TimeoutError: [Errno 110] Connection timed out
#0 131.2
#0 131.2 During handling of the above exception, another exception occurred:
#0 131.2
#0 131.2 Traceback (most recent call last):
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
#0 131.2 httplib_response = self._make_request(
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
#0 131.2 self._validate_conn(conn)
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
#0 131.2 conn.connect()
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
#0 131.2 self.sock = conn = self._new_conn()
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 179, in _new_conn
#0 131.2 raise ConnectTimeoutError(
#0 131.2 urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f408a42dd80>, 'Connection to drive.google.com timed out. (connect timeout=None)')
#0 131.2
#0 131.2 During handling of the above exception, another exception occurred:
#0 131.2
#0 131.2 Traceback (most recent call last):
#0 131.2 File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
#0 131.2 resp = conn.urlopen(
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
#0 131.2 retries = retries.increment(
#0 131.2 File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
#0 131.2 raise MaxRetryError(_pool, url, error or ResponseError(cause))
#0 131.2 urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f408a42dd80>, 'Connection to drive.google.com timed out. (connect timeout=None)'))
#0 131.2
#0 131.2 During handling of the above exception, another exception occurred:
#0 131.2
#0 131.2 Traceback (most recent call last):
#0 131.2 File "/usr/local/bin/gdown", line 8, in <module>
#0 131.2 sys.exit(main())
#0 131.2 File "/usr/local/lib/python3.10/site-packages/gdown/cli.py", line 150, in main
#0 131.2 filename = download(
#0 131.2 File "/usr/local/lib/python3.10/site-packages/gdown/download.py", line 146, in download
#0 131.2 res = sess.get(url, headers=headers, stream=True, verify=verify)
#0 131.2 File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 600, in get
#0 131.2 return self.request("GET", url, **kwargs)
#0 131.2 File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
#0 131.2 resp = self.send(prep, **send_kwargs)
#0 131.2 File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
#0 131.2 r = adapter.send(request, **kwargs)
#0 131.2 File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 553, in send
#0 131.2 raise ConnectTimeout(e, request=request)
#0 131.2 requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f408a42dd80>, 'Connection to drive.google.com timed out. (connect timeout=None)'))
------
failed to solve: executor failed running [/bin/bash -ceuxo pipefail gdown https://drive.google.com/uc?id=1SqwicrLx1TrG_sbbEoIF_3TUHd4EYSmw -O /wheel.whl]: exit code: 1
OK, it seems that you will face this error much more often than expected. Can you try proxying Docker entirely? https://docs.docker.com/network/proxy/#configure-the-docker-client
You won't need the individual overrides anymore.
Thank you and @DevilaN for your patience in answering. It works.
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://172.17.0.1:1080",
      "httpsProxy": "http://172.17.0.1:1080"
    }
  }
}
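For context, per the linked docs this block goes in ~/.docker/config.json, and 172.17.0.1 is the default docker0 bridge gateway, which is why containers can reach a proxy listening on that address (or on 0.0.0.0) on the host. A hypothetical reachability check from a throwaway container:

    # curl honors the all_proxy environment variable, like aria2c above
    docker run --rm -e all_proxy=http://172.17.0.1:1080 curlimages/curl -sI https://www.google.com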
OK, I fixed it myself based on your former discussion. For people like me using clash/v2ray on the host server to get through the CN GFW or any other firewall:
invoke: &invoke
  <<: *base_service
  profiles: ["invoke"]
  build: ./services/invoke/
  image: sd-invoke:30
  environment:
    - PRELOAD=true
    - CLI_ARGS=--xformers
    - HTTP_PROXY=http://host.docker.internal:7890
    - HTTPS_PROXY=http://host.docker.internal:7890
  extra_hosts:
    host.docker.internal: host-gateway
Replace 7890 with your proxy port.
Describe the bug
I am in a country that cannot access Google, so I have to use a proxy. I have a proxy, and I can use it by accessing 127.0.0.1:1080 on my real computer, but it does not work in Docker when I run "docker compose --profile download up --build". I searched "how to use proxy in docker" on Google and tried the methods they mention, but none of them worked.
Which UI
auto-cpu