JarodMica / ai-voice-cloning

GNU General Public License v3.0

Docker Linux: This site can’t be reached #92

Open underthesand opened 2 months ago

underthesand commented 2 months ago

Hello, fresh install of Ubuntu + Docker.

The setup script works fine, but after running the start script, http://127.0.0.1:7860/ can’t be reached.

It works with `share=True`; a sketch of that workaround follows, and the full start-docker.sh log is below.
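
(A minimal sketch of that workaround, assuming the Gradio `launch()` call in src/main.py; `ui` is a placeholder name for whatever Blocks/Interface object is actually launched there:)

```python
# Hypothetical sketch of the share=True workaround: Gradio tunnels the app
# through a public gradio.live URL, which sidesteps the container's local
# port binding entirely. "ui" stands in for the object src/main.py launches.
ui.launch(share=True)

# Without share=True, Gradio's default server_name is 127.0.0.1, so the server
# is only reachable from inside the container, which is why the host's browser
# cannot open http://127.0.0.1:7860/.
```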

./start-docker.sh 

==========
== CUDA ==
==========

CUDA Version 12.2.0

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

[2024-04-28 12:28:56,124] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
df: /home/user/.triton/autotune: No such file or directory
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
 [WARNING]  using untested triton version (2.3.0), only 1.0.0 is known to be compatible
INFO:rvc.configs.config:Found GPU NVIDIA GeForce RTX 4090
Whisper detected
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:httpx:load_verify_locations cafile='/home/user/miniconda/lib/python3.11/site-packages/certifi/cacert.pem'
DEBUG:httpcore.connection:connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
/home/user/miniconda/lib/python3.11/site-packages/gradio/components/dropdown.py:179: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include:  or set allow_custom_value=True.
  warnings.warn(
/home/user/miniconda/lib/python3.11/site-packages/gradio/utils.py:858: UserWarning: Expected 1 arguments for function <function update_voices at 0x78bed86bd6c0>, received 0.
  warnings.warn(
/home/user/miniconda/lib/python3.11/site-packages/gradio/utils.py:862: UserWarning: Expected at least 1 arguments for function <function update_voices at 0x78bed86bd6c0>, received 0.
  warnings.warn(
DEBUG:asyncio:Using selector: EpollSelector
Running on local URL:  http://127.0.0.1:7860
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='/home/user/miniconda/lib/python3.11/site-packages/certifi/cacert.pem'
DEBUG:httpcore.connection:connect_tcp.started host='127.0.0.1' port=7860 local_address=None timeout=None socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x78bed5c6bf10>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'GET']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'GET']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'GET']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Sun, 28 Apr 2024 12:28:57 GMT'), (b'server', b'uvicorn'), (b'content-length', b'5'), (b'content-type', b'application/json'), (b'access-control-allow-methods', b'GET, POST, PUT, DELETE, OPTIONS'), (b'access-control-allow-headers', b'Origin, Content-Type, Accept')])
INFO:httpx:HTTP Request: GET http://127.0.0.1:7860/startup-events "HTTP/1.1 200 OK"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'GET']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
DEBUG:httpcore.connection:close.started
DEBUG:httpcore.connection:close.complete
DEBUG:httpx:load_ssl_context verify=False cert=None trust_env=True http2=False
DEBUG:httpcore.connection:connect_tcp.started host='127.0.0.1' port=7860 local_address=None timeout=3 socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x78bed87472d0>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'HEAD']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'HEAD']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'HEAD']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Sun, 28 Apr 2024 12:28:57 GMT'), (b'server', b'uvicorn'), (b'content-length', b'133635'), (b'content-type', b'text/html; charset=utf-8'), (b'access-control-allow-methods', b'GET, POST, PUT, DELETE, OPTIONS'), (b'access-control-allow-headers', b'Origin, Content-Type, Accept')])
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'HEAD']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
DEBUG:httpcore.connection:close.started
DEBUG:httpcore.connection:close.complete

To create a public link, set `share=True` in `launch()`.
Loading TorToiSe... (AR: None, diffusion: None, vocoder: bigvgan_24khz_100band)
Hardware acceleration found: cuda
use_deepspeed api_debug False
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x78bed8756a90>
DEBUG:httpcore.connection:start_tls.started ssl_context=<ssl.SSLContext object at 0x78bed8a9e2a0> server_hostname='api.gradio.app' timeout=3
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli/resolve/main/config.json HTTP/1.1" 200 0
DEBUG:httpcore.connection:start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x78bed5d13710>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'GET']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'GET']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'GET']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 28 Apr 2024 12:28:58 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
INFO:httpx:HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'GET']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
DEBUG:httpcore.connection:close.started
DEBUG:httpcore.connection:close.complete
/home/user/miniconda/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /facebook/wav2vec2-large-960h/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /jbetker/tacotron-symbols/resolve/main/vocab.json HTTP/1.1" 200 0
Loading tokenizer JSON: /home/user/ai-voice-cloning/modules/tortoise-tts/tortoise/../tortoise/data/tokenizer.json
Loaded tokenizer
Loading autoregressive model: /home/user/ai-voice-cloning/models/tortoise/autoregressive.pth
Loaded autoregressive model
Loaded diffusion model
Loading vocoder model: bigvgan_24khz_100band
Loading vocoder model: bigvgan_24khz_100band.pth
Removing weight norm...
Loaded vocoder model
Loaded TTS, ready for generation.
zhenliu commented 2 months ago

Maybe a similar issue. I have a similar setup with WSL + Docker. The page returns "ERR_EMPTY_RESPONSE". I can get the correct content if I curl localhost:7860 inside the container. I tried starting a simple Python web server inside the container, which works fine.
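
(For anyone reproducing that check, a minimal sketch of such a test server; the port number is arbitrary, and whether it is reachable from the host depends on how the container's ports are published:)

```python
# Hypothetical version of the "simple python web server" test run inside the
# container: bind to 0.0.0.0 so connections arriving from outside the
# container are accepted. If this is reachable from the host while the Gradio
# UI is not, the bind address (not Docker networking) is the likely culprit.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```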

zhenliu commented 2 months ago

Found the solution to my issue. At https://github.com/JarodMica/ai-voice-cloning/blob/master/src/main.py#L34, the default value for the server name is None, but it needs to be 0.0.0.0 to be accessible remotely; in this case, access from outside the container seems to be treated as remote. Hardcoding the server name as 0.0.0.0 fixed the problem.
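
(For readers unfamiliar with Gradio, a minimal sketch of what that change amounts to; `ui` is a placeholder for whatever Blocks/Interface object src/main.py actually launches:)

```python
# Hypothetical sketch of the fix described above: when server_name is None,
# Gradio falls back to 127.0.0.1, which a Docker container exposes only to
# itself. Binding to 0.0.0.0 lets the port Docker publishes to the host
# actually reach the server.
ui.launch(server_name="0.0.0.0", server_port=7860)
```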

maepopi commented 1 month ago

Sadly, that didn't work for me... Can you write the exact line you wrote so I'm sure I'm doing the same thing?

EDIT: never mind, it did work, you just have to remember to re-run "setup-docker.sh" after changing the main.py script :) Thank you, friend!

MatthiasJonen commented 1 month ago

> Found the solution to my issue. At https://github.com/JarodMica/ai-voice-cloning/blob/master/src/main.py#L34, the default value for the server name is None, but it needs to be 0.0.0.0 to be accessible remotely; in this case, access from outside the container seems to be treated as remote. Hardcoding the server name as 0.0.0.0 fixed the problem.

Thank you for your comments! For me this worked fine: change server_name=args.listen_host to server_name="0.0.0.0", then rebuild with setup-docker.sh, and the browser can access the URL.
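
(Concretely, the one-line before/after on that keyword argument looks like the sketch below; the surrounding `launch()` call and the `ui` name are placeholders, not the exact code from src/main.py:)

```python
# Before/after of the single argument described above, inside the launch()
# call; other arguments are omitted.
ui.launch(
    # server_name=args.listen_host,   # before: can end up None, so Gradio binds to 127.0.0.1
    server_name="0.0.0.0",            # after: listen on all interfaces inside the container
)
```

As noted above, the change only takes effect after re-running setup-docker.sh so the rebuilt container picks it up.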

Vostav commented 1 month ago

I have the same issue as above; editing the main.py file at #L34 did not fix it. When using the link in any browser, I get this error on the web page: "The connection was reset"

The connection to the server was reset while the page was loading.

The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.

I was able to get the hello-world Docker test to work before I fully set up ai-voice-cloning to this point. I am using the Garuda Linux docker package.