Closed Rachneet closed 1 year ago
I am trying to run the FastChat Docker setup, but the model never gets loaded by the model worker. Error below:
```
fastchat-docker-fastchat-worker-1 | 2023-09-26 08:04:49 | INFO | model_worker | args: Namespace(awq_ckpt=None, awq_groupsize=-1, awq_wbits=16, controller_address='http://fastchat-controller:21001', conv_template=None, cpu_offloading=False, device='cpu', dtype=None, embed_in_truncate=False, gptq_act_order=False, gptq_ckpt=None, gptq_groupsize=-1, gptq_wbits=16, gpus=None, host='127.0.0.1', limit_worker_concurrency=5, load_8bit=False, max_gpu_memory=None, model_names=None, model_path='lmsys/fastchat-t5-3b-v1.0', no_register=False, num_gpus=0, port=21002, revision='main', seed=None, stream_interval=2, worker_address='http://fastchat-worker:21002')
fastchat-docker-fastchat-worker-1 | 2023-09-26 08:04:49 | INFO | model_worker | Loading the model ['fastchat-t5-3b-v1.0'] on worker 940a63ab ...
fastchat-docker-fastchat-worker-1 | '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /lmsys/fastchat-t5-3b-v1.0/resolve/main/spiece.model (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f933b9c34f0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: b3b83317-bab3-46e3-9027-7917d5d1ab4f)')' thrown while requesting HEAD https://huggingface.co/lmsys/fastchat-t5-3b-v1.0/resolve/main/spiece.model
fastchat-docker-fastchat-worker-1 | 2023-09-26 08:05:29 | WARNING | huggingface_hub.utils._http | '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /lmsys/fastchat-t5-3b-v1.0/resolve/main/spiece.model (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f933b9c34f0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: b3b83317-bab3-46e3-9027-7917d5d1ab4f)')' thrown while requesting HEAD https://huggingface.co/lmsys/fastchat-t5-3b-v1.0/resolve/main/spiece.model
```
Any ideas why this may be happening?
Solved: I just loaded the model from a locally saved copy instead of pulling it from huggingface.co.
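For anyone hitting the same timeout: since the worker fails while trying to reach huggingface.co from inside the container, one workaround in that spirit is to download the model on a machine with network access and point the worker's `--model-path` at the local directory. This is a rough sketch, not the exact commands from this setup; the local paths and the volume-mount location are assumptions:

```shell
# On a host with network access, fetch the model into a local directory.
# (huggingface-cli ships with the huggingface_hub package.)
pip install -U huggingface_hub
huggingface-cli download lmsys/fastchat-t5-3b-v1.0 --local-dir ./models/fastchat-t5-3b-v1.0

# Mount that directory into the worker container (path is an assumption),
# e.g. in docker-compose: volumes: ["./models:/models"], then start the
# worker with a local path instead of the Hub repo id:
python3 -m fastchat.serve.model_worker --model-path /models/fastchat-t5-3b-v1.0 --device cpu
```

With a local `--model-path`, the worker loads the files directly and never issues the HEAD request to huggingface.co that was timing out.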