Closed · Fanb1ing closed this 1 month ago
Hi @Fanb1ing we don't support NVIDIA GPUs with sm_52 since it is quite old at this point. I think the oldest that the community has tried is Pascal aka sm_60 https://github.com/vllm-project/vllm/issues/963
Thanks very much for your help. However, I'm still confused: the GPUs I used are an NVIDIA GeForce RTX 4090 and an NVIDIA GeForce RTX 3090, whose compute capabilities are 8.9 and 8.6. Where did the "sm_52" problem come from?
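For reference, this is how the compute capabilities can be confirmed straight from the driver (the `compute_cap` query field is an assumption — it only exists on reasonably recent `nvidia-smi` builds, so the check degrades gracefully where it doesn't):

```shell
# Query each GPU's name and compute capability from the driver.
if command -v nvidia-smi >/dev/null 2>&1; then
  caps=$(nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader 2>/dev/null \
         || echo "driver present but compute_cap query unsupported")
else
  caps="nvidia-smi not found (no NVIDIA driver on this machine)"
fi
echo "$caps"
```

An RTX 4090 reports 8.9 and an RTX 3090 reports 8.6, so sm_52 should not appear for either card.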
By the way, I tried `pip install .` in a new conda environment with python=3.9. It didn't work. Here is the error:
Building wheels for collected packages: vllm
  Building wheel for vllm (pyproject.toml) ... error
  error: subprocess-exited-with-error
× Building wheel for vllm (pyproject.toml) did not run successfully. │ exit code: 1 ?─> [299 lines of output] running bdist_wheel running build running build_py copying vllm/sampling_params.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/config.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/logger.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/init.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/utils.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/_custom_ops.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/outputs.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/envs.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/block.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/sequence.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/pooling_params.py -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/attention/selector.py -> build/lib.linux-x86_64-cpython-39/vllm/attention copying vllm/attention/layer.py -> build/lib.linux-x86_64-cpython-39/vllm/attention copying vllm/attention/init.py -> build/lib.linux-x86_64-cpython-39/vllm/attention copying vllm/core/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/init.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/evictor_v1.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/scheduler.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/embedding_model_block_manager.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/block_manager_v1.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/policy.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/evictor_v2.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/core/block_manager_v2.py -> build/lib.linux-x86_64-cpython-39/vllm/core copying vllm/worker/neuron_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying 
vllm/worker/worker_base.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/cpu_model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/init.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/embedding_model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/cpu_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/neuron_model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/worker.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/worker/cache_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/worker copying vllm/model_executor/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor copying vllm/model_executor/pooling_metadata.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor copying vllm/model_executor/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor copying vllm/model_executor/sampling_metadata.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor copying vllm/usage/usage_lib.py -> build/lib.linux-x86_64-cpython-39/vllm/usage copying vllm/usage/init.py -> build/lib.linux-x86_64-cpython-39/vllm/usage copying vllm/spec_decode/util.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/ngram_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/metrics.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/init.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/top1_proposer.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/spec_decode_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying 
vllm/spec_decode/batch_expansion.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/spec_decode/multi_step_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode copying vllm/lora/punica.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/request.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/init.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/layers.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/worker_manager.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/fully_sharded_layers.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/lora.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/lora/models.py -> build/lib.linux-x86_64-cpython-39/vllm/lora copying vllm/distributed/communication_op.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed copying vllm/distributed/init.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed copying vllm/distributed/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed copying vllm/distributed/parallel_state.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed copying vllm/executor/multiproc_gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/distributed_gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/ray_gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/neuron_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/init.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/cpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/executor_base.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying 
vllm/executor/ray_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/executor/multiproc_worker_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/executor copying vllm/logging/init.py -> build/lib.linux-x86_64-cpython-39/vllm/logging copying vllm/logging/formatter.py -> build/lib.linux-x86_64-cpython-39/vllm/logging copying vllm/entrypoints/api_server.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints copying vllm/entrypoints/init.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints copying vllm/entrypoints/llm.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints copying vllm/engine/metrics.py -> build/lib.linux-x86_64-cpython-39/vllm/engine copying vllm/engine/init.py -> build/lib.linux-x86_64-cpython-39/vllm/engine copying vllm/engine/async_llm_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/engine copying vllm/engine/arg_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/engine copying vllm/engine/llm_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/engine copying vllm/transformers_utils/config.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils copying vllm/transformers_utils/init.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils copying vllm/transformers_utils/detokenizer.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils copying vllm/transformers_utils/tokenizer.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils copying vllm/attention/ops/prefix_prefill.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops copying vllm/attention/ops/paged_attn.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops copying vllm/attention/ops/init.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops copying vllm/attention/ops/triton_flash_attention.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops copying vllm/attention/backends/xformers.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/attention/backends/torch_sdpa.py -> 
build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/attention/backends/rocm_flash_attn.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/attention/backends/flash_attn.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/attention/backends/init.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/attention/backends/flashinfer.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/attention/backends/abstract.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends copying vllm/core/block/cpu_gpu_block_allocator.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/core/block/prefix_caching_block.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/core/block/block_table.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/core/block/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/core/block/init.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/core/block/naive_block.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/core/block/common.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block copying vllm/model_executor/guided_decoding/outlines_logits_processors.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding copying vllm/model_executor/guided_decoding/lm_format_enforcer_decoding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding copying vllm/model_executor/guided_decoding/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding copying vllm/model_executor/guided_decoding/outlines_decoding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding copying vllm/model_executor/layers/vocab_parallel_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/logits_processor.py -> 
build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/rotary_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/activation.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/sampler.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/rejection_sampler.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/layernorm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/pooler.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/layers/linear.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers copying vllm/model_executor/model_loader/weight_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader copying vllm/model_executor/model_loader/loader.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader copying vllm/model_executor/model_loader/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader copying vllm/model_executor/model_loader/neuron.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader copying vllm/model_executor/model_loader/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader copying vllm/model_executor/model_loader/tensorizer.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader copying vllm/model_executor/models/qwen2_moe.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/qwen2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/stablelm.py -> 
build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/phi.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/llava.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/internlm2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/olmo.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/bloom.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/orion.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/qwen.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/mpt.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/mixtral.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/jais.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/deepseek.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/gemma.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/xverse.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/gpt_bigcode.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/dbrx.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/gpt2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/gpt_neox.py -> 
build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/mixtral_quant.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/decilm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/arctic.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/baichuan.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/falcon.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/llama_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/vlm_base.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/gpt_j.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/commandr.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/minicpm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/starcoder2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/opt.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/llama.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/models/chatglm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models copying vllm/model_executor/layers/fused_moe/fused_moe.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe copying vllm/model_executor/layers/fused_moe/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe copying vllm/model_executor/layers/ops/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops copying 
vllm/model_executor/layers/ops/sample.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops copying vllm/model_executor/layers/ops/rand.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops copying vllm/model_executor/layers/quantization/schema.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/gptq_marlin.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/fp8.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/squeezellm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/gptq_marlin_24.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/base_config.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/aqlm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/awq.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/marlin.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/gptq.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/deepspeedfp.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization copying vllm/model_executor/layers/quantization/utils/marlin_perms.py -> 
build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils copying vllm/model_executor/layers/quantization/utils/init.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils copying vllm/model_executor/layers/quantization/utils/format_24.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils copying vllm/model_executor/layers/quantization/utils/marlin_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils copying vllm/model_executor/layers/quantization/utils/marlin_24_perms.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils copying vllm/model_executor/layers/quantization/utils/quant_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils copying vllm/distributed/device_communicators/init.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators copying vllm/distributed/device_communicators/custom_all_reduce.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators copying vllm/distributed/device_communicators/pynccl.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators copying vllm/distributed/device_communicators/pynccl_wrapper.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators copying vllm/entrypoints/openai/serving_chat.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/serving_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/api_server.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/init.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/run_batch.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/serving_embedding.py -> 
build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/serving_completion.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/protocol.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/entrypoints/openai/cli_args.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai copying vllm/engine/output_processor/util.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor copying vllm/engine/output_processor/multi_step.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor copying vllm/engine/output_processor/single_step.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor copying vllm/engine/output_processor/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor copying vllm/engine/output_processor/init.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor copying vllm/engine/output_processor/stop_checker.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor copying vllm/transformers_utils/configs/mpt.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/configs/jais.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/configs/dbrx.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/configs/init.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/configs/arctic.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/configs/falcon.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/configs/chatglm.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs copying vllm/transformers_utils/tokenizer_group/init.py -> 
build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group copying vllm/transformers_utils/tokenizer_group/tokenizer_group.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group copying vllm/transformers_utils/tokenizer_group/ray_tokenizer_group.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group copying vllm/transformers_utils/tokenizer_group/base_tokenizer_group.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group copying vllm/transformers_utils/tokenizers/init.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizers copying vllm/transformers_utils/tokenizers/baichuan.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizers copying vllm/py.typed -> build/lib.linux-x86_64-cpython-39/vllm copying vllm/model_executor/layers/fused_moe/configs/E=8,N=2048,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=1792,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=4096,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=16,N=1344,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_A100-SXM4-40GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=16,N=2688,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying 
vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_H100_80GB_HBM3,dtype=float8.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=16,N=2688,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=1792,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=1792,device_name=NVIDIA_A100-SXM4-40GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=16,N=1344,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=16,N=1344,device_name=NVIDIA_A100-SXM4-40GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_H100_80GB_HBM3,dtype=float8.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=2048,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_A100-SXM4-80GB.json -> 
build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=4096,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs copying vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs running build_ext CMake Error at CMakeLists.txt:3 (project): Running
'/tmp/pip-build-env-_st73erw/overlay/bin/ninja' '--version'
failed with:
no such file or directory
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "/data2/fanbingbing/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/data2/fanbingbing/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/data2/fanbingbing/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 410, in build_wheel
return self._build_with_temp_dir(
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 395, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 401, in <module>
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 104, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 184, in setup
return run_commands(dist)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 200, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 967, in run_command
super().run_command(command)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 368, in run
self.run_command("build")
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 967, in run_command
super().run_command(command)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 967, in run_command
super().run_command(command)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 91, in run
_build_ext.run(self)
File "/tmp/pip-build-env-rgurjuq7/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
File "<string>", line 192, in build_extensions
File "<string>", line 175, in configure
File "/data2/fanbingbing/.conda/envs/vllm-embedding/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/data2/fanbingbing/Segregation/LLaMA-embedding/vllm', '-G', 'Ninja', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/data2/fanbingbing/Segregation/LLaMA-embedding/vllm/build/lib.linux-x86_64-cpython-39/vllm', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY=build/temp.linux-x86_64-cpython-39', '-DVLLM_TARGET_DEVICE=cuda', '-DVLLM_PYTHON_EXECUTABLE=/data2/fanbingbing/.conda/envs/vllm-embedding/bin/python', '-DNVCC_THREADS=1', '-DCMAKE_JOB_POOL_COMPILE:STRING=compile', '-DCMAKE_JOB_POOLS:STRING=compile=128']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for vllm
Failed to build vllm
ERROR: Could not build wheels for vllm, which is required to install pyproject.toml-based projects
I assumed an old GPU because of this section of your original log:
```
#$ gcc -D__CUDA_ARCH__=520 -D__CUDA_ARCH_LIST__=520 -E -x c++ "tmp/CMakeCUDACompilerId.cudafe1.gpu" "tmp/CMakeCUDACompilerId.cpp1.ii" -o "tmp/CMakeCUDACompilerId.ptx"
#$ ptxas -arch=sm_52 -m64 "tmp/CMakeCUDACompilerId.ptx" -o "tmp/CMakeCUDACompilerId.sm_52.cubin"
ptxas tmp/CMakeCUDACompilerId.ptx, line 9; fatal : Unsupported .version 7.5; current version is '6.3'
ptxas fatal : Ptx assembly aborted due to errors
```
Given your system setup with CUDA 12.4 and RTX 3090/4090, this should be easily supported. (The sm_52 in that log is just nvcc's default target for CMake's compiler-identification test, not your actual GPU.) I'm not sure what is going wrong here, but since the build process appears to be finding CUDA 11.x binaries, I would carefully check your paths to see if you have multiple versions of CUDA installed, and verify the version of nvcc and the other CUDA utilities.
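A quick way to audit which CUDA toolchain a build will pick up. This is a sketch using the conventional /usr/local install locations; adjust for your server:

```shell
# Show which nvcc is first on PATH -- this is what CMake uses by default.
if command -v nvcc >/dev/null 2>&1; then
  echo "nvcc on PATH: $(command -v nvcc)"
  nvcc --version | grep release
else
  echo "no nvcc on PATH"
fi

# List every toolkit installed under /usr/local (the conventional location).
ls -d /usr/local/cuda* 2>/dev/null || echo "no /usr/local/cuda* directories"

# Note: the "CUDA Version" printed by nvidia-smi is the newest runtime the
# DRIVER supports, not the toolkit version you compile with.
{ command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi | grep "CUDA Version"; } || true
```

If the nvcc on PATH reports an 11.x release while you expect 12.x, that mismatch is the likely culprit.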
'/tmp/pip-build-env-_st73erw/overlay/bin/ninja' '--version' failed with: no such file or directory
The cmake command somehow remembers the old build directory /tmp/pip-build-env-_st73erw, while the current build directory is /tmp/pip-build-env-rgurjuq7. You have a dirty build; clean up the previous build artifacts before building again.
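A minimal clean-up sequence, assuming you run it from the root of your vLLM checkout:

```shell
# Run from the root of your vLLM clone.
rm -rf build                # stale CMake/ninja output from the failed attempt
pip cache purge || true     # drop any cached vllm wheel pip might reuse
# More aggressively, discard everything git does not track:
#   git clean -xdf
# Then rebuild:
#   pip install .
[ -d build ] && echo "build/ still present" || echo "build/ removed"
```

A fresh `git clone` into a new directory achieves the same thing.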
Really, really, thank you all!!! Today I tried again in a new environment and re-downloaded the git code to avoid the dirty-build problem. I found that there are indeed many different versions of CUDA on the server. I put CUDA 12.1 on the PATH with
export CUDA_HOME=/usr/local/cuda-12.1
export PATH="${CUDA_HOME}/bin:$PATH"
and then ran pip install .
I got a new error: "nvcc fatal : Unsupported gpu architecture 'compute_89'". I am so sorry, but I can't figure out what is wrong with the version. T^T
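"Unsupported gpu architecture 'compute_89'" usually means an nvcc older than CUDA 11.8 is being invoked, since compute_89 (Ada, i.e. the RTX 4090) was only added in 11.8. The log below is consistent with that: CMake reports "Check for working CUDA compiler: /usr/bin/nvcc" with identification "NVIDIA 11.5.119", so the system nvcc is still shadowing the 12.1 toolchain despite the PATH change. A sketch of pinning the compiler explicitly (assuming CUDA 12.1 lives at /usr/local/cuda-12.1):

```shell
# Pin the toolchain explicitly so /usr/bin/nvcc (CUDA 11.5 here) cannot
# shadow the intended 12.1 install.
export CUDA_HOME=/usr/local/cuda-12.1
export PATH="${CUDA_HOME}/bin:${PATH}"
export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}"
# CMake honors the CUDACXX environment variable when selecting the CUDA
# compiler, which bypasses any stale /usr/bin/nvcc entirely.
export CUDACXX="${CUDA_HOME}/bin/nvcc"

# Sanity check before rebuilding; this should print "release 12.1".
if [ -x "$CUDACXX" ]; then
  "$CUDACXX" --version | grep release
else
  echo "nvcc not found at $CUDACXX (adjust CUDA_HOME)"
fi
# pip install .
```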
Building wheel for vllm (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for vllm (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [390 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-39
creating build/lib.linux-x86_64-cpython-39/vllm
copying vllm/sampling_params.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/config.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/logger.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/utils.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/_custom_ops.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/outputs.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/envs.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/block.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/sequence.py -> build/lib.linux-x86_64-cpython-39/vllm
copying vllm/pooling_params.py -> build/lib.linux-x86_64-cpython-39/vllm
creating build/lib.linux-x86_64-cpython-39/vllm/attention
copying vllm/attention/selector.py -> build/lib.linux-x86_64-cpython-39/vllm/attention
copying vllm/attention/layer.py -> build/lib.linux-x86_64-cpython-39/vllm/attention
copying vllm/attention/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/attention
creating build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/evictor_v1.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/scheduler.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/embedding_model_block_manager.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/block_manager_v1.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/policy.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/evictor_v2.py -> build/lib.linux-x86_64-cpython-39/vllm/core
copying vllm/core/block_manager_v2.py -> build/lib.linux-x86_64-cpython-39/vllm/core
creating build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/neuron_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/worker_base.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/cpu_model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/embedding_model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/cpu_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/neuron_model_runner.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/worker.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
copying vllm/worker/cache_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/worker
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor
copying vllm/model_executor/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor
copying vllm/model_executor/pooling_metadata.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor
copying vllm/model_executor/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor
copying vllm/model_executor/sampling_metadata.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor
creating build/lib.linux-x86_64-cpython-39/vllm/usage
copying vllm/usage/usage_lib.py -> build/lib.linux-x86_64-cpython-39/vllm/usage
copying vllm/usage/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/usage
creating build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/util.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/ngram_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/metrics.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/top1_proposer.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/spec_decode_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/batch_expansion.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
copying vllm/spec_decode/multi_step_worker.py -> build/lib.linux-x86_64-cpython-39/vllm/spec_decode
creating build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/punica.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/request.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/layers.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/worker_manager.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/fully_sharded_layers.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/lora.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
copying vllm/lora/models.py -> build/lib.linux-x86_64-cpython-39/vllm/lora
creating build/lib.linux-x86_64-cpython-39/vllm/distributed
copying vllm/distributed/communication_op.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed
copying vllm/distributed/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed
copying vllm/distributed/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed
copying vllm/distributed/parallel_state.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed
creating build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/multiproc_gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/distributed_gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/ray_gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/neuron_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/gpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/cpu_executor.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/executor_base.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/ray_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
copying vllm/executor/multiproc_worker_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/executor
creating build/lib.linux-x86_64-cpython-39/vllm/logging
copying vllm/logging/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/logging
copying vllm/logging/formatter.py -> build/lib.linux-x86_64-cpython-39/vllm/logging
creating build/lib.linux-x86_64-cpython-39/vllm/entrypoints
copying vllm/entrypoints/api_server.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints
copying vllm/entrypoints/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints
copying vllm/entrypoints/llm.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints
creating build/lib.linux-x86_64-cpython-39/vllm/engine
copying vllm/engine/metrics.py -> build/lib.linux-x86_64-cpython-39/vllm/engine
copying vllm/engine/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/engine
copying vllm/engine/async_llm_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/engine
copying vllm/engine/arg_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/engine
copying vllm/engine/llm_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/engine
creating build/lib.linux-x86_64-cpython-39/vllm/transformers_utils
copying vllm/transformers_utils/config.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils
copying vllm/transformers_utils/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils
copying vllm/transformers_utils/detokenizer.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils
copying vllm/transformers_utils/tokenizer.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils
creating build/lib.linux-x86_64-cpython-39/vllm/attention/ops
copying vllm/attention/ops/prefix_prefill.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops
copying vllm/attention/ops/paged_attn.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops
copying vllm/attention/ops/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops
copying vllm/attention/ops/triton_flash_attention.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/ops
creating build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/xformers.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/torch_sdpa.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/rocm_flash_attn.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/flash_attn.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/flashinfer.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
copying vllm/attention/backends/abstract.py -> build/lib.linux-x86_64-cpython-39/vllm/attention/backends
creating build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/cpu_gpu_block_allocator.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/prefix_caching_block.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/block_table.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/naive_block.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
copying vllm/core/block/common.py -> build/lib.linux-x86_64-cpython-39/vllm/core/block
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding
copying vllm/model_executor/guided_decoding/outlines_logits_processors.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding
copying vllm/model_executor/guided_decoding/lm_format_enforcer_decoding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding
copying vllm/model_executor/guided_decoding/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding
copying vllm/model_executor/guided_decoding/outlines_decoding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/guided_decoding
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/vocab_parallel_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/logits_processor.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/rotary_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/activation.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/sampler.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/rejection_sampler.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/layernorm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/pooler.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
copying vllm/model_executor/layers/linear.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
copying vllm/model_executor/model_loader/weight_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
copying vllm/model_executor/model_loader/loader.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
copying vllm/model_executor/model_loader/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
copying vllm/model_executor/model_loader/neuron.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
copying vllm/model_executor/model_loader/utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
copying vllm/model_executor/model_loader/tensorizer.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/model_loader
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/qwen2_moe.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/qwen2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/stablelm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/phi.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/llava.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/internlm2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/olmo.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/bloom.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/orion.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/qwen.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/mpt.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/mixtral.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/jais.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/deepseek.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/gemma.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/xverse.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/gpt_bigcode.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/dbrx.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/gpt2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/gpt_neox.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/mixtral_quant.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/decilm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/arctic.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/baichuan.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/falcon.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/llama_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/vlm_base.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/gpt_j.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/commandr.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/minicpm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/starcoder2.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/opt.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/llama.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
copying vllm/model_executor/models/chatglm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/models
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe
copying vllm/model_executor/layers/fused_moe/fused_moe.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe
copying vllm/model_executor/layers/fused_moe/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops
copying vllm/model_executor/layers/ops/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops
copying vllm/model_executor/layers/ops/sample.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops
copying vllm/model_executor/layers/ops/rand.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/ops
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/schema.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/gptq_marlin.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/fp8.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/squeezellm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/gptq_marlin_24.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/base_config.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/aqlm.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/awq.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/marlin.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/gptq.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
copying vllm/model_executor/layers/quantization/deepspeedfp.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
copying vllm/model_executor/layers/quantization/utils/marlin_perms.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
copying vllm/model_executor/layers/quantization/utils/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
copying vllm/model_executor/layers/quantization/utils/format_24.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
copying vllm/model_executor/layers/quantization/utils/marlin_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
copying vllm/model_executor/layers/quantization/utils/marlin_24_perms.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
copying vllm/model_executor/layers/quantization/utils/quant_utils.py -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/quantization/utils
creating build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators
copying vllm/distributed/device_communicators/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators
copying vllm/distributed/device_communicators/custom_all_reduce.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators
copying vllm/distributed/device_communicators/pynccl.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators
copying vllm/distributed/device_communicators/pynccl_wrapper.py -> build/lib.linux-x86_64-cpython-39/vllm/distributed/device_communicators
creating build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/serving_chat.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/serving_engine.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/api_server.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/run_batch.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/serving_embedding.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/serving_completion.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/protocol.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
copying vllm/entrypoints/openai/cli_args.py -> build/lib.linux-x86_64-cpython-39/vllm/entrypoints/openai
creating build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
copying vllm/engine/output_processor/util.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
copying vllm/engine/output_processor/multi_step.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
copying vllm/engine/output_processor/single_step.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
copying vllm/engine/output_processor/interfaces.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
copying vllm/engine/output_processor/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
copying vllm/engine/output_processor/stop_checker.py -> build/lib.linux-x86_64-cpython-39/vllm/engine/output_processor
creating build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/mpt.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/jais.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/dbrx.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/arctic.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/falcon.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
copying vllm/transformers_utils/configs/chatglm.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/configs
creating build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group
copying vllm/transformers_utils/tokenizer_group/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group
copying vllm/transformers_utils/tokenizer_group/tokenizer_group.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group
copying vllm/transformers_utils/tokenizer_group/ray_tokenizer_group.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group
copying vllm/transformers_utils/tokenizer_group/base_tokenizer_group.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizer_group
creating build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizers
copying vllm/transformers_utils/tokenizers/__init__.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizers
copying vllm/transformers_utils/tokenizers/baichuan.py -> build/lib.linux-x86_64-cpython-39/vllm/transformers_utils/tokenizers
copying vllm/py.typed -> build/lib.linux-x86_64-cpython-39/vllm
creating build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=2048,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=1792,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=4096,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=16,N=1344,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_A100-SXM4-40GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=16,N=2688,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_H100_80GB_HBM3,dtype=float8.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=16,N=2688,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=1792,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=1792,device_name=NVIDIA_A100-SXM4-40GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=16,N=1344,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=16,N=1344,device_name=NVIDIA_A100-SXM4-40GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=3584,device_name=NVIDIA_H100_80GB_HBM3,dtype=float8.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=2048,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=4096,device_name=NVIDIA_A100-SXM4-80GB.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=7168,device_name=NVIDIA_H100_80GB_HBM3.json -> build/lib.linux-x86_64-cpython-39/vllm/model_executor/layers/fused_moe/configs
running build_ext
-- The CXX compiler identification is GNU 11.4.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build type: RelWithDebInfo
-- Target device: cuda
-- Found Python: /usr/local/anaconda3/bin/python (found version "3.9.12") found components: Interpreter Development.Module
-- Found python matching: /usr/local/anaconda3/bin/python.
-- Found CUDA: /usr/local/cuda-12.1 (found version "12.1")
-- The CUDA compiler identification is NVIDIA 11.5.119
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/include (found version "11.5.119")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Caffe2: CUDA detected: 12.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda-12.1/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-12.1
-- Caffe2: Header version is: 12.1
-- /usr/local/cuda-12.1/lib64/libnvrtc.so shorthash is d540eb83
-- USE_CUDNN is set to 0. Compiling without cuDNN support
-- USE_CUSPARSELT is set to 0. Compiling without cuSPARSELt support
-- Autodetected CUDA architecture(s): 8.6 8.6 8.9 8.9 8.9 8.9
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_89,code=sm_89
CMake Warning at /tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:127 (append_torchlib_if_found)
CMakeLists.txt:67 (find_package)
-- Found Torch: /tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/torch/lib/libtorch.so
-- CUDA supported arches: 7.0;7.5;8.0;8.6;8.9;9.0
-- CUDA target arches: 86-real;89-real
-- CMake Version: 3.29.3
-- CUTLASS 3.5.0
-- CUDART: /usr/local/cuda-12.1/lib64/libcudart.so
-- CUDA Driver: /usr/local/cuda-12.1/lib64/stubs/libcuda.so
-- NVRTC: /usr/local/cuda-12.1/lib64/libnvrtc.so
-- Default Install Location: install
-- Found Python3: /data2/fanbingbing/.conda/envs/embedding-0522/bin/python3.9 (found suitable version "3.9.19", minimum required is "3.5") found components: Interpreter
-- CUDA Compilation Architectures: 70;72;75;80;86;87;89;90;90a
-- Enable caching of reference results in conv unit tests
-- Enable rigorous conv problem sizes in conv unit tests
-- Using NVCC flags: --expt-relaxed-constexpr;-DCUTLASS_TEST_LEVEL=0;-DCUTLASS_TEST_ENABLE_CACHED_RESULTS=1;-DCUTLASS_CONV_UNIT_TEST_RIGOROUS_SIZE_ENABLED=1;-DCUTLASS_DEBUG_TRACE_LEVEL=0;-Xcompiler=-Wconversion;-Xcompiler=-fno-strict-aliasing;-lineinfo
-- CUTLASS Revision: 5f6d10c1
-- Configuring cublas ...
-- cuBLAS Disabled.
-- Configuring cuBLAS ... done.
-- Completed generation of library instances. See /data2/fanbingbing/Segregation/LLaMA-embedding/vllm/build/temp.linux-x86_64-cpython-39/_deps/cutlass-build/tools/library/library_instance_generation.log for more information.
-- Punica target arches: 86-real;89-real
-- Enabling C extension.
-- Enabling moe extension.
-- Configuring done (17.8s)
-- Generating done (0.6s)
-- Build files have been written to: /data2/fanbingbing/Segregation/LLaMA-embedding/vllm/build/temp.linux-x86_64-cpython-39
[0/2] Re-checking globbed directories...
[1/3] Building CUDA object CMakeFiles/_moe_C.dir/csrc/moe/topk_softmax_kernels.cu.o
FAILED: CMakeFiles/_moe_C.dir/csrc/moe/topk_softmax_kernels.cu.o
/usr/bin/nvcc -forward-unknown-to-host-compiler -DTORCH_EXTENSION_NAME=_moe_C -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_moe_C_EXPORTS -I/data2/fanbingbing/Segregation/LLaMA-embedding/vllm/csrc -isystem /usr/local/anaconda3/include/python3.9 -isystem /tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/torch/include -isystem /tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -DONNX_NAMESPACE=onnx_c2 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -O2 -g -DNDEBUG -std=c++17 "--generate-code=arch=compute_86,code=[sm_86]" "--generate-code=arch=compute_89,code=[sm_89]" -Xcompiler=-fPIC --expt-relaxed-constexpr -DENABLE_FP8 --threads=1 -D_GLIBCXX_USE_CXX11_ABI=0 -MD -MT CMakeFiles/_moe_C.dir/csrc/moe/topk_softmax_kernels.cu.o -MF CMakeFiles/_moe_C.dir/csrc/moe/topk_softmax_kernels.cu.o.d -x cu -c /data2/fanbingbing/Segregation/LLaMA-embedding/vllm/csrc/moe/topk_softmax_kernels.cu -o CMakeFiles/_moe_C.dir/csrc/moe/topk_softmax_kernels.cu.o
nvcc fatal : Unsupported gpu architecture 'compute_89'
[2/3] Building CXX object CMakeFiles/_moe_C.dir/csrc/moe/moe_ops.cpp.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/data2/fanbingbing/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/data2/fanbingbing/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/data2/fanbingbing/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 410, in build_wheel
return self._build_with_temp_dir(
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 395, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 401, in <module>
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 184, in setup
return run_commands(dist)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 200, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 968, in run_command
super().run_command(command)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 368, in run
self.run_command("build")
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 968, in run_command
super().run_command(command)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 968, in run_command
super().run_command(command)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 91, in run
_build_ext.run(self)
File "/tmp/pip-build-env-5vq6sgdl/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
File "<string>", line 202, in build_extensions
File "/usr/local/anaconda3/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', '_moe_C', '-j', '128']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for vllm
Failed to build vllm
ERROR: Could not build wheels for vllm, which is required to install pyproject.toml-based projects
-- Found CUDAToolkit: /usr/include (found version "11.5.119")
Your CUDA environment is too complicated; the build somehow picks up the 11.5 toolkit.
@Fanb1ing I think this bug is related to this commit: [Kernel] Add w8a8 CUTLASS kernels
The build succeeds when I reset the main branch to the previous commit, [Misc] remove old comments.
I guess CUTLASS may conflict with the CUDA version or the GPU driver version, which needs more analysis. @youkaichao Do you have time to look at this issue?
I don't know how the CUTLASS kernel affects this. You need to clean up the environment until it finds the right CUDA.
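One way to steer the build toward the intended toolkit, without uninstalling anything, is to put the desired `nvcc` first on `PATH` and set `CUDA_HOME` before running pip. This is a minimal sketch, assuming the toolkit lives at `/usr/local/cuda-12.1` as the log above shows:

```shell
# Point the pip build at the CUDA 12.1 toolkit instead of whatever
# /usr/bin/nvcc happens to resolve to (here, the system-wide 11.5 install).
# Assumes the 12.1 toolkit is installed at /usr/local/cuda-12.1.
export CUDA_HOME=/usr/local/cuda-12.1
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# Sanity-check which nvcc the build will now see.
command -v nvcc && nvcc --version | grep release || echo "nvcc not found under $CUDA_HOME"
```

Running `pip install .` in the same shell afterwards should then let CMake identify the 12.1 compiler instead of the 11.5 one.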
My test environment only have one cuda version. The base docker image is nvcr.io/nvidia/cuda:11.8.0-devel-centos7
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17    Driver Version: 525.105.17    CUDA Version: 12.0   |
and
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
Thanks for all your help. I carefully checked the different CUDA versions. It seems the CXX compiler and CUDA compiler being picked up are still old, which is likely caused by the multiple versions installed on the server. Unfortunately, I can't simply delete the old versions because that would affect other users. This blog post describes a possible solution, which I will try in the future: https://blog.kovalevskyi.com/multiple-version-of-cuda-libraries-on-the-same-machine-b9502d50ae77
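The approach in that post boils down to keeping every toolkit under its own `/usr/local/cuda-<ver>` prefix and letting each user opt in per shell, so the system default stays untouched. A sketch of such a selector (the `use_cuda` helper name is hypothetical):

```shell
# Hypothetical per-shell CUDA selector: each toolkit stays installed under
# /usr/local/cuda-<ver>, and a user activates one without touching the
# system-wide default that other users rely on.
use_cuda() {
  local ver="$1"
  export CUDA_HOME="/usr/local/cuda-${ver}"
  export PATH="${CUDA_HOME}/bin:${PATH}"
  export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}"
}

use_cuda 12.1
echo "$CUDA_HOME"
```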
@youkaichao I solved this issue by upgrading to CUDA version 12.4. Therefore, my guess seems correct: some of the new features in the main branch may not be compatible with CUDA 11.8.
It looks like the arch is explicitly set to be compute_90a here https://github.com/vllm-project/vllm/commit/2060e93659f1f63a3d2a76aee61559ccb1fe732e#diff-1e7de1ae2d059d21e1dd75d5812d5a34b0222cef273b7c3a2af62eb747f9d20aR206 . Does cuda 11.8 support compute_90a?
Here's the error message:
3821.5 /usr/local/cuda-11.8/bin/nvcc -forward-unknown-to-host-compiler -DTORCH_EXTENSION_NAME=_C -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_C_EXPORTS -I/tmp/vllm/csrc -I/tmp/vllm/build/temp.linux-x86_64-cpython-310/_deps/cutlass-src/include -I/tmp/vllm/build/temp.linux-x86_64-cpython-310/_deps/cutlass-src/tools/util/include -isystem /usr /local/include/python3.10 -isystem /tmp/pip-build-env-qkfji_82/overlay/lib/python3.10/site-packages/torch/include -isystem /tmp/pip-build-env-qkfji_82/overlay/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-11.8/include -DONNX_NAMESPACE=onnx_c2 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag _suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -O2 -g -DNDEBUG -std=c++17 "--generate-code=arch=compute_80,code=[sm_80]" "--generate-code=arch=compute_86,code=[sm_86]" "--generate-code=arch=c ompute_89,code=[sm_89]" -Xcompiler=-fPIC -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DENABLE_FP8 --threads=1 -D_GLIBCXX_USE_CXX11_ABI=0 -gencode arch=compute_90a,code=sm_90a -MD -MT CMakeFiles/_C.dir/csrc/quantization/cutlass_w8a8/scaled_mm_dq_c3x.cu.o -MF CMakeFiles/_C.dir/ csrc/quantization/cutlass_w8a8/scaled_mm_dq_c3x.cu.o.d -x cu -c /tmp/vllm/csrc/quantization/cutlass_w8a8/scaled_mm_dq_c3x.cu -o CMakeFiles/_C.dir/csrc/quantization/cutlass_w8a8/scaled_mm_dq_c3x.cu.o 3821.5 nvcc fatal : Unsupported gpu architecture 'compute_90a'
I hit this issue as well trying to build on a GCP VM with an L4 GPU on CUDA 11.8. I had to revert to v0.4.2, before the above-mentioned commit.
Yeah, it probably doesn't make sense to try to compile that with CUDA 11.8 since it'll fail. Can that be fixed by conditioning on the CUDA version?
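Such a guard would mirror a toolkit-version check: `compute_90a` (Hopper) only exists from CUDA 12.0 onward, so the `-gencode arch=compute_90a,code=sm_90a` flag should be added only when the detected `nvcc` is new enough. A rough sketch of the version check, parsing the "release X.Y" text that `nvcc --version` prints (the `supports_90a` helper name is hypothetical):

```shell
# Sketch: gate the Hopper-only compute_90a target on the toolkit version,
# mirroring the CUDA-version condition the build system would need.
# Takes the `nvcc --version` output text as its argument.
supports_90a() {
  local major
  major=$(printf '%s\n' "$1" | sed -n 's/.*release \([0-9][0-9]*\)\..*/\1/p')
  [ "${major:-0}" -ge 12 ]
}

supports_90a "Cuda compilation tools, release 11.8, V11.8.89" && echo yes || echo no  # no
supports_90a "Cuda compilation tools, release 12.4, V12.4.99" && echo yes || echo no  # yes
```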
Closing as the problem has been solved for OP.
Your current environment
How you are installing vllm
```
Building wheels for collected packages: vllm
  Building editable for vllm (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building editable for vllm (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─ CMake Error at /tmp/pip-build-env-l6d1bzk0/overlay/lib/python3.11/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:814 (message):
       Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed.

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building editable for vllm
Failed to build vllm
ERROR: Could not build wheels for vllm, which is required to install pyproject.toml-based projects
```