The vLLM server fails to load the model within 30 minutes.

Last logs:

INFO 10-02 08:07:17 shm_broadcast.py:241] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1, 2, 3], buffer=<vllm.distributed.device_communicators.shm_broadcast.ShmRingBuffer object at 0x14fbd35198e0>, local_subscribe_port=37383, remote_subscribe_port=None)
INFO 10-02 08:07:18 model_runner.py:1014] Starting to load model mistralai/Mistral-Large-Instruct-2407...
(VllmWorkerProcess pid=2089121) INFO 10-02 08:07:18 model_runner.py:1014] Starting to load model mistralai/Mistral-Large-Instruct-2407...
(VllmWorkerProcess pid=2089122) INFO 10-02 08:07:18 model_runner.py:1014] Starting to load model mistralai/Mistral-Large-Instruct-2407...
(VllmWorkerProcess pid=2089123) INFO 10-02 08:07:18 model_runner.py:1014] Starting to load model mistralai/Mistral-Large-Instruct-2407...
(VllmWorkerProcess pid=2089122) INFO 10-02 08:07:20 weight_utils.py:242] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=2089121) INFO 10-02 08:07:20 weight_utils.py:242] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=2089123) INFO 10-02 08:07:20 weight_utils.py:242] Using model weights format ['*.safetensors']
INFO 10-02 08:07:20 weight_utils.py:242] Using model weights format ['*.safetensors']