jayteaftw opened this issue 5 months ago
After testing the container version, I noticed I can get further:
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable -v ~/.cache/huggingface:/root/.cache/huggingface --env "HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxx" -p 6370:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Mixtral-8x7B-Instruct-v0.1 --tensor-parallel-size 2 --dtype bfloat16
INFO 04-29 17:10:22 api_server.py:151] vLLM API server version 0.4.1
INFO 04-29 17:10:22 api_server.py:152] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, served_model_name=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='mistralai/Mixtral-8x7B-Instruct-v0.1', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='bfloat16', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=5, disable_log_stats=False, quantization=None, enforce_eager=False, max_context_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, device='auto', image_input_type=None, image_token_id=None, image_input_shape=None, image_feature_size=None, scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_max_model_len=None, model_loader_extra_config=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
2024-04-29 17:10:25,030 WARNING utils.py:580 -- Detecting docker specified CPUs. In previous versions of Ray, CPU detection in containers was incorrect. Please ensure that Ray has enough CPUs allocated. As a temporary workaround to revert to the prior behavior, set `RAY_USE_MULTIPROCESSING_CPU_COUNT=1` as an env var before starting Ray. Set the env var: `RAY_DISABLE_DOCKER_CPU_WARNING=1` to mute this warning.
2024-04-29 17:10:26,203 INFO worker.py:1749 -- Started a local Ray instance.
INFO 04-29 17:10:27 llm_engine.py:98] Initializing an LLM engine (v0.4.1) with config: model='mistralai/Mixtral-8x7B-Instruct-v0.1', speculative_config=None, tokenizer='mistralai/Mixtral-8x7B-Instruct-v0.1', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0)
INFO 04-29 17:10:30 utils.py:608] Found nccl from library /root/.config/vllm/nccl/cu12/libnccl.so.2.18.1
(RayWorkerWrapper pid=1018) INFO 04-29 17:10:30 utils.py:608] Found nccl from library /root/.config/vllm/nccl/cu12/libnccl.so.2.18.1
INFO 04-29 17:10:32 selector.py:28] Using FlashAttention backend.
(RayWorkerWrapper pid=1018) INFO 04-29 17:10:32 selector.py:28] Using FlashAttention backend.
INFO 04-29 17:10:33 pynccl_utils.py:43] vLLM is using nccl==2.18.1
(RayWorkerWrapper pid=1018) INFO 04-29 17:10:33 pynccl_utils.py:43] vLLM is using nccl==2.18.1
INFO 04-29 17:10:38 utils.py:115] generating GPU P2P access cache for in /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
INFO 04-29 17:10:38 utils.py:129] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
(RayWorkerWrapper pid=1018) INFO 04-29 17:10:38 utils.py:129] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
INFO 04-29 17:10:39 weight_utils.py:193] Using model weights format ['*.safetensors']
But it fails after about 10 minutes of loading with the following output:
INFO 04-29 17:19:06 model_runner.py:173] Loading model weights took 43.5064 GB
(RayWorkerWrapper pid=1018) INFO 04-29 17:19:18 model_runner.py:173] Loading model weights took 43.5064 GB
INFO 04-29 17:19:19 fused_moe.py:299] Using configuration from /usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_A100-SXM4-80GB.json for MoE layer.
(RayWorkerWrapper pid=1018) INFO 04-29 17:19:19 fused_moe.py:299] Using configuration from /usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_A100-SXM4-80GB.json for MoE layer.
ERROR 04-29 17:19:20 worker_base.py:157] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
ERROR 04-29 17:19:20 worker_base.py:157] Traceback (most recent call last):
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 149, in execute_method
ERROR 04-29 17:19:20 worker_base.py:157] return executor(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 04-29 17:19:20 worker_base.py:157] return func(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 138, in determine_num_available_blocks
ERROR 04-29 17:19:20 worker_base.py:157] self.model_runner.profile_run()
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 04-29 17:19:20 worker_base.py:157] return func(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 927, in profile_run
ERROR 04-29 17:19:20 worker_base.py:157] self.execute_model(seqs, kv_caches)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 04-29 17:19:20 worker_base.py:157] return func(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 848, in execute_model
ERROR 04-29 17:19:20 worker_base.py:157] hidden_states = model_executable(**execute_model_kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 419, in forward
ERROR 04-29 17:19:20 worker_base.py:157] hidden_states = self.model(input_ids, positions, kv_caches,
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 353, in forward
ERROR 04-29 17:19:20 worker_base.py:157] hidden_states, residual = layer(positions, hidden_states,
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 312, in forward
ERROR 04-29 17:19:20 worker_base.py:157] hidden_states = self.block_sparse_moe(hidden_states)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 155, in forward
ERROR 04-29 17:19:20 worker_base.py:157] final_hidden_states = fused_moe(hidden_states,
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 434, in fused_moe
ERROR 04-29 17:19:20 worker_base.py:157] invoke_fused_moe_kernel(hidden_states,
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 244, in invoke_fused_moe_kernel
ERROR 04-29 17:19:20 worker_base.py:157] fused_moe_kernel[grid](
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/runtime/jit.py", line 532, in run
ERROR 04-29 17:19:20 worker_base.py:157] self.cache[device][key] = compile(
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 614, in compile
ERROR 04-29 17:19:20 worker_base.py:157] so_path = make_stub(name, signature, constants, ids, enable_warp_specialization=enable_warp_specialization)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/compiler/make_launcher.py", line 37, in make_stub
ERROR 04-29 17:19:20 worker_base.py:157] so = _build(name, src_path, tmpdir)
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/common/build.py", line 71, in _build
ERROR 04-29 17:19:20 worker_base.py:157] cuda_lib_dirs = libcuda_dirs()
ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/common/build.py", line 40, in libcuda_dirs
ERROR 04-29 17:19:20 worker_base.py:157] assert any(os.path.exists(os.path.join(path, 'libcuda.so')) for path in dirs), msg
ERROR 04-29 17:19:20 worker_base.py:157] AssertionError: libcuda.so cannot found!
ERROR 04-29 17:19:20 worker_base.py:157] Possible files are located at ['/lib64/libcuda.so.1'].Please create a symlink of libcuda.so to any of the file.
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 159, in <module>
engine = AsyncLLMEngine.from_engine_args(
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 361, in from_engine_args
engine = cls(
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 319, in __init__
self.engine = self._init_engine(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 437, in _init_engine
return engine_class(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 160, in __init__
self._initialize_kv_caches()
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 236, in _initialize_kv_caches
self.model_executor.determine_num_available_blocks())
File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 199, in determine_num_available_blocks
num_blocks = self._run_workers("determine_num_available_blocks", )
File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 318, in _run_workers
driver_worker_output = self.driver_worker.execute_method(
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 158, in execute_method
raise e
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 149, in execute_method
return executor(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 138, in determine_num_available_blocks
self.model_runner.profile_run()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 927, in profile_run
self.execute_model(seqs, kv_caches)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 848, in execute_model
hidden_states = model_executable(**execute_model_kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 419, in forward
hidden_states = self.model(input_ids, positions, kv_caches,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 353, in forward
hidden_states, residual = layer(positions, hidden_states,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 312, in forward
hidden_states = self.block_sparse_moe(hidden_states)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 155, in forward
final_hidden_states = fused_moe(hidden_states,
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 434, in fused_moe
invoke_fused_moe_kernel(hidden_states,
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 244, in invoke_fused_moe_kernel
fused_moe_kernel[grid](
File "/usr/local/lib/python3.10/dist-packages/triton/runtime/jit.py", line 532, in run
self.cache[device][key] = compile(
File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 614, in compile
so_path = make_stub(name, signature, constants, ids, enable_warp_specialization=enable_warp_specialization)
File "/usr/local/lib/python3.10/dist-packages/triton/compiler/make_launcher.py", line 37, in make_stub
so = _build(name, src_path, tmpdir)
File "/usr/local/lib/python3.10/dist-packages/triton/common/build.py", line 71, in _build
cuda_lib_dirs = libcuda_dirs()
File "/usr/local/lib/python3.10/dist-packages/triton/common/build.py", line 40, in libcuda_dirs
assert any(os.path.exists(os.path.join(path, 'libcuda.so')) for path in dirs), msg
AssertionError: libcuda.so cannot found!
Possible files are located at ['/lib64/libcuda.so.1'].Please create a symlink of libcuda.so to any of the file.
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] Traceback (most recent call last):
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 149, in execute_method
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return executor(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return func(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 138, in determine_num_available_blocks
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] self.model_runner.profile_run()
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return func(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 927, in profile_run
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] self.execute_model(seqs, kv_caches)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return func(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 848, in execute_model
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] hidden_states = model_executable(**execute_model_kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 419, in forward
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] hidden_states = self.model(input_ids, positions, kv_caches,
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 353, in forward
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] hidden_states, residual = layer(positions, hidden_states,
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 312, in forward
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] hidden_states = self.block_sparse_moe(hidden_states)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mixtral.py", line 155, in forward
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 434, in fused_moe
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] invoke_fused_moe_kernel(hidden_states,
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 244, in invoke_fused_moe_kernel
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] fused_moe_kernel[grid](
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/runtime/jit.py", line 532, in run
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] self.cache[device][key] = compile(
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 614, in compile
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] so_path = make_stub(name, signature, constants, ids, enable_warp_specialization=enable_warp_specialization)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/compiler/make_launcher.py", line 37, in make_stub
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] so = _build(name, src_path, tmpdir)
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/common/build.py", line 71, in _build
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] cuda_lib_dirs = libcuda_dirs()
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] File "/usr/local/lib/python3.10/dist-packages/triton/common/build.py", line 40, in libcuda_dirs
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] assert any(os.path.exists(os.path.join(path, 'libcuda.so')) for path in dirs), msg
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] AssertionError: libcuda.so cannot found!
(RayWorkerWrapper pid=1018) ERROR 04-29 17:19:20 worker_base.py:157] Possible files are located at ['/lib64/libcuda.so.1'].Please create a symlink of libcuda.so to any of the file.
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
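The assertion at the bottom points at a possible fix: Triton's JIT compiler only finds the versioned driver library (/lib64/libcuda.so.1) and wants an unversioned libcuda.so next to it. A workaround sketch I have not verified, assuming the image's default entrypoint is the OpenAI API server module, is to create that symlink inside the container before launching vLLM:

# Workaround sketch: add the libcuda.so symlink Triton expects, then start the server as before.
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxx" \
  -p 6370:8000 --ipc=host \
  --entrypoint /bin/bash vllm/vllm-openai:latest -c \
  'ln -sf /lib64/libcuda.so.1 /lib64/libcuda.so && \
   python3 -m vllm.entrypoints.openai.api_server \
     --model mistralai/Mixtral-8x7B-Instruct-v0.1 \
     --tensor-parallel-size 2 --dtype bfloat16'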
Interestingly, though, I can run mistralai/Mistral-7B-Instruct-v0.2 perfectly fine on 2 GPUs:
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable -v ~/.cache/huggingface:/root/.cache/huggingface --env "HUGGING_FACE_HUB_TOKEN=hf_xxxxxxx" -p 6370:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Mistral-7B-Instruct-v0.2 --tensor-parallel-size 2
INFO 04-29 17:22:26 api_server.py:151] vLLM API server version 0.4.1
INFO 04-29 17:22:26 api_server.py:152] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, served_model_name=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='mistralai/Mistral-7B-Instruct-v0.2', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=5, disable_log_stats=False, quantization=None, enforce_eager=False, max_context_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, device='auto', image_input_type=None, image_token_id=None, image_input_shape=None, image_feature_size=None, scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_max_model_len=None, model_loader_extra_config=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
2024-04-29 17:22:28,832 WARNING utils.py:580 -- Detecting docker specified CPUs. In previous versions of Ray, CPU detection in containers was incorrect. Please ensure that Ray has enough CPUs allocated. As a temporary workaround to revert to the prior behavior, set `RAY_USE_MULTIPROCESSING_CPU_COUNT=1` as an env var before starting Ray. Set the env var: `RAY_DISABLE_DOCKER_CPU_WARNING=1` to mute this warning.
2024-04-29 17:22:30,005 INFO worker.py:1749 -- Started a local Ray instance.
INFO 04-29 17:22:30 llm_engine.py:98] Initializing an LLM engine (v0.4.1) with config: model='mistralai/Mistral-7B-Instruct-v0.2', speculative_config=None, tokenizer='mistralai/Mistral-7B-Instruct-v0.2', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0)
INFO 04-29 17:22:34 utils.py:608] Found nccl from library /root/.config/vllm/nccl/cu12/libnccl.so.2.18.1
(RayWorkerWrapper pid=1019) INFO 04-29 17:22:34 utils.py:608] Found nccl from library /root/.config/vllm/nccl/cu12/libnccl.so.2.18.1
INFO 04-29 17:22:35 selector.py:28] Using FlashAttention backend.
(RayWorkerWrapper pid=1019) INFO 04-29 17:22:35 selector.py:28] Using FlashAttention backend.
INFO 04-29 17:22:37 pynccl_utils.py:43] vLLM is using nccl==2.18.1
(RayWorkerWrapper pid=1019) INFO 04-29 17:22:37 pynccl_utils.py:43] vLLM is using nccl==2.18.1
INFO 04-29 17:22:42 utils.py:115] generating GPU P2P access cache for in /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
INFO 04-29 17:22:42 utils.py:129] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
(RayWorkerWrapper pid=1019) INFO 04-29 17:22:42 utils.py:129] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1.json
INFO 04-29 17:22:43 weight_utils.py:193] Using model weights format ['*.safetensors']
(RayWorkerWrapper pid=1019) INFO 04-29 17:22:43 weight_utils.py:193] Using model weights format ['*.safetensors']
INFO 04-29 17:24:39 model_runner.py:173] Loading model weights took 6.7544 GB
(RayWorkerWrapper pid=1019) INFO 04-29 17:24:39 model_runner.py:173] Loading model weights took 6.7544 GB
INFO 04-29 17:24:41 ray_gpu_executor.py:217] # GPU blocks: 60373, # CPU blocks: 4096
(RayWorkerWrapper pid=1019) INFO 04-29 17:24:46 model_runner.py:976] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(RayWorkerWrapper pid=1019) INFO 04-29 17:24:46 model_runner.py:980] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 04-29 17:24:46 model_runner.py:976] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 04-29 17:24:46 model_runner.py:980] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 04-29 17:24:51 custom_all_reduce.py:246] Registering 2275 cuda graph addresses
INFO 04-29 17:24:51 model_runner.py:1057] Graph capturing finished in 5 secs.
(RayWorkerWrapper pid=1019) INFO 04-29 17:24:51 custom_all_reduce.py:246] Registering 2275 cuda graph addresses
(RayWorkerWrapper pid=1019) INFO 04-29 17:24:51 model_runner.py:1057] Graph capturing finished in 5 secs.
INFO 04-29 17:24:52 serving_chat.py:344] Using default chat template:
INFO 04-29 17:24:52 serving_chat.py:344] {{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
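With the server up, the working Mistral-7B deployment can be smoke-tested through the OpenAI-compatible endpoint, assuming it is reachable on the host at port 6370 via the -p 6370:8000 mapping above:

# Quick smoke test against the OpenAI-compatible chat endpoint.
curl http://localhost:6370/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistralai/Mistral-7B-Instruct-v0.2", "messages": [{"role": "user", "content": "Say hello"}], "max_tokens": 32}'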
Your current environment
How would you like to use vllm
I want to run inference on mistralai/Mixtral-8x7B-Instruct-v0.1 using the OpenAI-compatible server.
python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --port 6370 --tensor-parallel-size 2
When I follow the instructions here, the program freezes with the output
Output of nvidia-smi
When I use the smaller Mistral model and set --tensor-parallel-size equal to 1, it works as intended.
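For reference, the single-GPU run that works is presumably the same bare-metal command as above with the parallelism reduced, along the lines of:

# Presumed working single-GPU invocation (tensor parallelism disabled).
python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --port 6370 --tensor-parallel-size 1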
Update 1:
Got further with the container version, as seen in the comment below.
Update 2:
Was able to successfully run v0.2.7:
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable -v ~/.cache/huggingface:/root/.cache/huggingface --env "HUGGING_FACE_HUB_TOKEN=xxxxx" -p 6370:8000 --ipc=host vllm/vllm-openai:v0.2.7 --model mistralai/Mixtral-8x7B-Instruct-v0.1 --tensor-parallel-size 2
However, v0.3.3 fails due to a Google DNS issue ...
Update 3:
v0.4.0 shows the same error.
Update 4:
v0.3.2 works as intended