Tencent / HunyuanDiT

Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
https://dit.hunyuan.tencent.com/

httpx.ReadTimeout: timed out #26

Closed whmc76 closed 1 month ago

whmc76 commented 1 month ago

```
(HunyuanDiT) PS E:\IMAGE\HunyuanDiT> python app/hydit_app.py --no-enhance
flash_attn import failed: No module named 'flash_attn'
2024-05-16 11:42:25.899 | INFO | hydit.inference:__init__:160 - Got text-to-image model root path: ckpts\t2i
2024-05-16 11:42:25.916 | INFO | hydit.inference:__init__:172 - Loading CLIP Text Encoder...
2024-05-16 11:42:27.218 | INFO | hydit.inference:__init__:175 - Loading CLIP Text Encoder finished
2024-05-16 11:42:27.218 | INFO | hydit.inference:__init__:178 - Loading CLIP Tokenizer...
2024-05-16 11:42:27.242 | INFO | hydit.inference:__init__:181 - Loading CLIP Tokenizer finished
2024-05-16 11:42:27.242 | INFO | hydit.inference:__init__:184 - Loading T5 Text Encoder and T5 Tokenizer...
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\transformers\convert_slow_tokenizer.py:515: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
  warnings.warn(
You are using a model of type mt5 to instantiate a model of type t5. This is not supported for all configurations of models and can yield errors.
2024-05-16 11:42:37.485 | INFO | hydit.inference:__init__:188 - Loading t5_text_encoder and t5_tokenizer finished
2024-05-16 11:42:37.485 | INFO | hydit.inference:__init__:191 - Loading VAE...
2024-05-16 11:42:37.633 | INFO | hydit.inference:__init__:194 - Loading VAE finished
2024-05-16 11:42:37.633 | INFO | hydit.inference:__init__:198 - Building HunYuan-DiT model...
2024-05-16 11:42:37.907 | INFO | hydit.modules.models:__init__:229 - Number of tokens: 4096
2024-05-16 11:42:46.679 | INFO | hydit.inference:__init__:218 - Loading model checkpoint ckpts\t2i\model\pytorch_model_ema.pt...
2024-05-16 11:42:48.218 | INFO | hydit.inference:__init__:229 - Loading inference pipeline...
2024-05-16 11:42:48.233 | INFO | hydit.inference:__init__:231 - Loading pipeline finished
2024-05-16 11:42:48.233 | INFO | hydit.inference:__init__:235 - ==================================================
2024-05-16 11:42:48.233 | INFO | hydit.inference:__init__:236 - Model is ready.
2024-05-16 11:42:48.233 | INFO | hydit.inference:__init__:237 - ==================================================
Running on local URL: http://0.0.0.0:443
Traceback (most recent call last):
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
    yield
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\http_proxy.py", line 207, in handle_request
    return self._connection.handle_request(proxy_request)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\http11.py", line 143, in handle_request
    raise exc
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\http11.py", line 113, in handle_request
    ) = self._receive_response_headers(**kwargs)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\http11.py", line 186, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_sync\http11.py", line 224, in _receive_event
    data = self._network_stream.read(
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_backends\sync.py", line 126, in read
    return self._sock.recv(max_bytes)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "app/hydit_app.py", line 170, in <module>
    interface.launch(server_name="0.0.0.0", server_port=443, share=True)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\gradio\blocks.py", line 2351, in launch
    and not networking.url_ok(self.local_url)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\gradio\networking.py", line 54, in url_ok
    r = httpx.head(url, timeout=3, verify=False)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_api.py", line 278, in head
    return request(
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_api.py", line 106, in request
    return client.request(
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_client.py", line 827, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_client.py", line 914, in send
    response = self._send_handling_auth(
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\28687\.conda\envs\HunyuanDiT\lib\site-packages\httpx\_transports\default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out
```
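For context, the timeout is raised inside Gradio's post-launch health check: `blocks.py` calls `networking.url_ok(self.local_url)`, which issues `httpx.head(url, timeout=3, verify=False)`, and the `http_proxy.py` frames in the first traceback suggest that request is being routed through a system proxy and never answered within the 3-second window. Below is a minimal sketch of that same probe, useful for checking whether a proxy is the culprit; the URL and the `trust_env=False` retry are assumptions for illustration, not part of the repo:

```python
# Rough reproduction of the check in gradio/networking.py::url_ok (see traceback above).
import httpx

url = "http://127.0.0.1:443"  # assumption: the locally launched HunyuanDiT Gradio server

try:
    # Same call that times out in the traceback.
    r = httpx.head(url, timeout=3, verify=False)
    print("reachable with default settings:", r.status_code)
except httpx.ReadTimeout:
    # httpx honours HTTP(S)_PROXY environment variables by default (trust_env=True),
    # so a system-wide proxy can swallow even local probes. Retrying with
    # trust_env=False bypasses the proxy and helps confirm that diagnosis.
    r = httpx.head(url, timeout=3, verify=False, trust_env=False)
    print("reachable with proxy bypassed:", r.status_code)
```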

zml-ai commented 1 month ago

Hi, thanks for your interest! You could try downgrading Gradio from 4.19.2 to 3.50.2. We'll update the recommended Gradio version in requirements.txt.
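If it helps, applying that suggestion in the same environment should just be a matter of pinning the older release, e.g. `pip install gradio==3.50.2`.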

Jarvis73 commented 3 weeks ago

@whmc76 According to the related issue and PR in the Gradio repo, you can update Gradio and disable analytics with the following commands:

```shell
pip install gradio>=4.21.0
export GRADIO_ANALYTICS_ENABLED=False
```
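Note that these commands are written for a POSIX shell; on Windows PowerShell (the shell shown in the reporter's log), the rough equivalents would be `pip install "gradio>=4.21.0"` (quoted so the `>` is not interpreted by the shell) and `$env:GRADIO_ANALYTICS_ENABLED = "False"`.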
whmc76 commented 3 weeks ago

> @whmc76 According to the related issue and PR in the Gradio repo, you can update Gradio and disable analytics with the following commands:
>
> ```shell
> pip install gradio>=4.21.0
> export GRADIO_ANALYTICS_ENABLED=False
> ```

Got it.