vllm-project / vllm

[Bug]: p2p check in custom all reduce not working #7588

Closed. cjackal closed this issue 2 months ago.

cjackal commented 2 months ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35

Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe

Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] Could not collect
```

🐛 Describe the bug

It seems the GPU P2P check script in custom_all_reduce_utils.py has trouble finding the CUDA runtime; after some boot-up logic, the vLLM CLI fails shortly after startup with the error message `AssertionError: libcudart.so is not loaded in the current process`.

The error is raised regardless of the GPU topology (whether NVLink or PCIe, I mean), and the vLLM server runs fine after adding the --enable-custom-all-reduce=False option or manually creating the ~/.cache/vllm/gpu_p2p_access_cache_for_0,1.json cache file, so I think the current library search method in gpu_p2p_access_check should be updated somehow.
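As a quick, purely illustrative sanity check that the hardware itself reports P2P capability (plain PyTorch, not vLLM's actual test, which additionally copies tensors between devices to verify correctness), something like the following can be run:

```python
# Hedged sketch: query the CUDA driver for peer-to-peer capability between
# every pair of visible GPUs. This only mirrors the capability part of the
# check; vLLM's script also verifies that P2P transfers produce correct data.
import torch

num_gpus = torch.cuda.device_count()
for src in range(num_gpus):
    for dst in range(num_gpus):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: P2P {'supported' if ok else 'not supported'}")
```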

The CLI arguments I used are as follows:

vllm serve /mnt/model-vol-1/model/META-LLAMA-3.1-70B-INSTRUCT/ --port 8080 --root-path /notebook/cjackal/cjackal-vscode/proxy/8080 --served-model-name meta-llama/meta-llama-3.1-70b-instruct --quantization fp8 --tensor-parallel-size 2

cf. there is another open issue about the P2P check (#3688), but it doesn't look relevant.

full vLLM CLI log and traceback:

```
INFO 08-16 11:21:48 api_server.py:339] vLLM API server version 0.5.4
INFO 08-16 11:21:48 api_server.py:340] args: Namespace(model_tag='/mnt/model-vol-1/model/META-LLAMA-3.1-70B-INSTRUCT/', host=None, port=8080, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path='/notebook/cjackal/cjackal-vscode/proxy/8080', middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, model='/mnt/model-vol-1/model/META-LLAMA-3.1-70B-INSTRUCT/', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization='fp8', rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['meta-llama/meta-llama-3.1-70b-instruct'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None, dispatch_function=)
WARNING 08-16 11:21:48 config.py:1454] Casting torch.bfloat16 to torch.float16.
INFO 08-16 11:21:48 config.py:729] Defaulting to use mp for distributed inference
WARNING 08-16 11:21:48 arg_utils.py:766] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 08-16 11:21:48 config.py:820] Chunked prefill is enabled with max_num_batched_tokens=512.
INFO 08-16 11:21:48 llm_engine.py:174] Initializing an LLM engine (v0.5.4) with config: model='/mnt/model-vol-1/model/META-LLAMA-3.1-70B-INSTRUCT/', speculative_config=None, tokenizer='/mnt/model-vol-1/model/META-LLAMA-3.1-70B-INSTRUCT/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=meta-llama/meta-llama-3.1-70b-instruct, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 08-16 11:21:48 multiproc_gpu_executor.py:59] Reducing Torch parallelism from 56 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 08-16 11:21:48 custom_cache_manager.py:17] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
(VllmWorkerProcess pid=2523349) INFO 08-16 11:21:48 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
INFO 08-16 11:21:49 utils.py:841] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=2523349) INFO 08-16 11:21:49 utils.py:841] Found nccl from library libnccl.so.2
INFO 08-16 11:21:49 pynccl.py:63] vLLM is using nccl==2.17.1
(VllmWorkerProcess pid=2523349) INFO 08-16 11:21:49 pynccl.py:63] vLLM is using nccl==2.17.1
INFO 08-16 11:21:49 custom_all_reduce_utils.py:203] generating GPU P2P access cache in /home/jovyan/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
Process Process-1:
ERROR 08-16 11:22:00 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 2523349 died, exit code: -15
INFO 08-16 11:22:00 multiproc_worker_utils.py:123] Killing local vLLM worker processes
Traceback (most recent call last):
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py", line 220, in gpu_p2p_access_check
    returned.check_returncode()
  File "/usr/lib/python3.10/subprocess.py", line 457, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '['/home/jovyan/git/vllm-serving/.venv/bin/python', '/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/entrypoints/openai/rpc/server.py", line 217, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, port)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/entrypoints/openai/rpc/server.py", line 25, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(async_engine_args,
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 471, in from_engine_args
    engine = cls(
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 381, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 552, in _init_engine
    return engine_class(*args, **kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 249, in __init__
    self.model_executor = executor_class(
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 215, in __init__
    super().__init__(*args, **kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/executor/distributed_gpu_executor.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 47, in __init__
    self._init_executor()
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 137, in _init_executor
    self._run_workers("init_device")
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 192, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 132, in init_device
    init_worker_distributed_environment(self.parallel_config, self.rank,
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 348, in init_worker_distributed_environment
    ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 965, in ensure_model_parallel_initialized
    initialize_model_parallel(tensor_model_parallel_size,
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 931, in initialize_model_parallel
    _TP = init_model_parallel_group(group_ranks,
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 773, in init_model_parallel_group
    return GroupCoordinator(
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 164, in __init__
    self.ca_comm = CustomAllreduce(
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce.py", line 126, in __init__
    if not _can_p2p(rank, world_size):
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce.py", line 30, in _can_p2p
    if not gpu_p2p_access_check(rank, i):
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py", line 223, in gpu_p2p_access_check
    raise RuntimeError(
RuntimeError: Error happened when batch testing peer-to-peer access from (0, 0, 1, 1) to (0, 1, 0, 1):
Process SpawnProcess-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py", line 64, in consumer
    lib = CudaRTLibrary()
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_wrapper.py", line 103, in __init__
    assert so_file is not None, \
AssertionError: libcudart.so is not loaded in the current process
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py", line 31, in producer
    lib = CudaRTLibrary()
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_wrapper.py", line 103, in __init__
    assert so_file is not None, \
AssertionError: libcudart.so is not loaded in the current process
Traceback (most recent call last):
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py", line 245, in <module>
    result = can_actually_p2p(batch_src, batch_tgt)
  File "/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/vllm/distributed/device_communicators/custom_all_reduce_utils.py", line 148, in can_actually_p2p
    assert p_src.exitcode == 0 and p_tgt.exitcode == 0
AssertionError
```
youkaichao commented 2 months ago

How did you install PyTorch? A normal installation of PyTorch should work; PyTorch will load libcudart.so when you import it.

In addition, if you want to run it, you should add --disable-custom-all-reduce; we don't have an --enable-custom-all-reduce flag.
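For example, the command from the report would then look like this (same arguments as above, with only the custom all-reduce flag appended):

vllm serve /mnt/model-vol-1/model/META-LLAMA-3.1-70B-INSTRUCT/ --port 8080 --root-path /notebook/cjackal/cjackal-vscode/proxy/8080 --served-model-name meta-llama/meta-llama-3.1-70b-instruct --quantization fp8 --tensor-parallel-size 2 --disable-custom-all-reduce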

cjackal commented 2 months ago

> How did you install PyTorch? A normal installation of PyTorch should work; PyTorch will load libcudart.so when you import it.

By `pip install torch==2.4.0+cu121 --index-url https://download.pytorch.org/whl/cu121`, nothing special.

> In addition, if you want to run it, you should add --disable-custom-all-reduce; we don't have an --enable-custom-all-reduce flag.

Right, my bad. I have indeed confirmed that after adding the --disable-custom-all-reduce option the vLLM server runs okay.

youkaichao commented 2 months ago

can you have a try:

```python
import torch
from vllm.distributed.device_communicators.cuda_wrapper import find_loaded_library
print(find_loaded_library("libcudart.so"))
```

see what happens?

cjackal commented 2 months ago

> can you have a try:
>
> ```python
> import torch
> from vllm.distributed.device_communicators.cuda_wrapper import find_loaded_library
> print(find_loaded_library("libcudart.so"))
> ```
>
> see what happens?

It prints `None`. FYI, if I print the full `/proc/self/maps`, torch's libcudart is loaded from `/home/jovyan/git/vllm-serving/.venv/lib/python3.10/site-packages/torch/lib/libcudart-9335f6a2.so.12`.
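For reference, a minimal sketch of that inspection (assuming a Linux system, since it reads /proc/self/maps directly):

```python
# Minimal sketch: list the shared objects mapped into the current process
# whose path mentions "libcudart". On this machine it prints the
# hash-suffixed name libcudart-9335f6a2.so.12 rather than plain libcudart.so.
import torch  # importing torch is what pulls libcudart into the process

with open("/proc/self/maps") as f:
    paths = {line.split()[-1] for line in f if "libcudart" in line}

for path in sorted(paths):
    print(path)
```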

cjackal commented 2 months ago

FYI: in my torch lib directory, there are 5 shared objects whose filenames are suffixed with a hash.

It seems the library search should be updated to use some sort of glob pattern? A rough sketch of the idea is below.
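A minimal sketch of what such a pattern-based lookup could look like (purely illustrative; the helper name and pattern are hypothetical, not vLLM's actual API or the change that eventually landed):

```python
# Hypothetical helper (illustration only, not vLLM's API): find a loaded
# shared library by fnmatch-style pattern so that hash-suffixed names such
# as libcudart-9335f6a2.so.12 match as well as plain libcudart.so.12.
import fnmatch
import os
from typing import Optional

import torch  # importing torch loads libcudart into the process


def find_loaded_library_by_pattern(pattern: str) -> Optional[str]:
    with open("/proc/self/maps") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 6:
                continue  # anonymous mapping with no backing file
            path = parts[-1]
            if fnmatch.fnmatch(os.path.basename(path), pattern):
                return path
    return None


# "libcudart*.so*" matches both "libcudart.so.12" and "libcudart-9335f6a2.so.12"
print(find_loaded_library_by_pattern("libcudart*.so*"))
```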

youkaichao commented 2 months ago

> libcudart-9335f6a2.so.12

Interesting, I will update the glob patterns.

youkaichao commented 2 months ago

@cjackal can you try #7620?

cjackal commented 2 months ago

@youkaichao Thanks for the prompt response; I have left a suggested change in the PR. It works nicely indeed.