Running demo/function_call/function_calling.py hits a similar problem:
(minicpm3) root@feiye:/home/feiye/MiniCPM3-4B-service/MiniCPM-main/demo/function_call# CUDA_VISIBLE_DEVICES=0 python function_calling.py
WARNING 09-09 11:21:09 cuda.py:69] Detected different devices in the system:
WARNING 09-09 11:21:09 cuda.py:69] NVIDIA GeForce RTX 4090 D
WARNING 09-09 11:21:09 cuda.py:69] Tesla T4
WARNING 09-09 11:21:09 cuda.py:69] Please make sure to set CUDA_DEVICE_ORDER=PCI_BUS_ID to avoid unexpected behavior.
INFO 09-09 11:21:10 llm_engine.py:213] Initializing an LLM engine (v0.6.0) with config: model='/home/feiye/Model/MiniCPM3-4B', speculative_config=None, tokenizer='/home/feiye/Model/MiniCPM3-4B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/home/feiye/Model/MiniCPM3-4B, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-09 11:21:10 model_runner.py:915] Starting to load model /home/feiye/Model/MiniCPM3-4B...
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/feiye/MiniCPM3-4B-service/MiniCPM-main/demo/function_call/function_calling.py", line 80, in <module>
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 177, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 538, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 305, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 47, in __init__
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/executor/gpu_executor.py", line 40, in _init_executor
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/worker/worker.py", line 182, in load_model
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 917, in load_model
[rank0]:     self.model = get_model(model_config=self.model_config,
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 341, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 168, in _initialize_model
[rank0]:     model_class, _ = get_model_architecture(model_config)
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/model_executor/model_loader/utils.py", line 31, in get_model_architecture
[rank0]:     return ModelRegistry.resolve_model_cls(architectures)
[rank0]:   File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/model_executor/models/__init__.py", line 170, in resolve_model_cls
[rank0]:     raise ValueError(
[rank0]: ValueError: Model architectures ['MiniCPM3ForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'DeepseekV2ForCausalLM', 'ExaoneForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'NemotronForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'PhiMoEForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'ArcticForCausalLM', 'XverseForCausalLM', 'Phi3SmallForCausalLM', 'MedusaModel', 'EAGLEModel', 'MLPSpeculatorPreTrainedModel', 'JambaForCausalLM', 'GraniteForCausalLM', 'MistralModel', 'Blip2ForConditionalGeneration', 'ChameleonForConditionalGeneration', 'FuyuForCausalLM', 'InternVLChatModel', 'LlavaForConditionalGeneration', 'LlavaNextForConditionalGeneration', 'MiniCPMV', 'PaliGemmaForConditionalGeneration', 'Phi3VForCausalLM', 'UltravoxModel', 'BartModel', 'BartForConditionalGeneration']
You can try installing the vLLM fork that supports MiniCPM3 first (the vLLM release you are running does not register the MiniCPM3ForCausalLM architecture yet, as the supported-architectures list in the error shows):
pip install git+https://github.com/OpenBMB/vllm.git@minicpm3
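If the install succeeds, a quick sanity check is to load the model directly (a minimal sketch, assuming the fork installed cleanly; the model path is the one from the log above, and the prompt is just a placeholder):

from vllm import LLM, SamplingParams

# Load MiniCPM3-4B; trust_remote_code=True is required for its custom model code.
llm = LLM(model="/home/feiye/Model/MiniCPM3-4B", trust_remote_code=True)

# Generate a short completion to confirm the architecture resolves and loads.
params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Hello, please introduce yourself."], params)
print(outputs[0].outputs[0].text)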
Thanks!
@Cppowboy Could I ask for your advice? I got an error while installing vllm and am not sure how to resolve it. The error output is as follows:
Building wheels for collected packages: vllm
  Building wheel for vllm (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for vllm (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [100 lines of output]
      fatal: not a git repository (or any of the parent directories): .git
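One possible cause (an assumption based on the "fatal: not a git repository" line, not confirmed from this log alone): vLLM's build step shells out to git to embed version/commit information, and the tree pip builds from has no .git metadata. A workaround that usually sidesteps this is to clone the branch yourself and install from the checkout:

git clone -b minicpm3 https://github.com/OpenBMB/vllm.git
cd vllm
pip install -e .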
Description / 描述
File "/root/anaconda3/envs/minicpm3/lib/python3.11/site-packages/vllm/model_executor/models/init.py", line 170, in resolve_model_cls raise ValueError( ValueError: Model architectures ['MiniCPM3ForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'DeepseekV2ForCausalLM', 'ExaoneForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'NemotronForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'PhiMoEForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'ArcticForCausalLM', 'XverseForCausalLM', 'Phi3SmallForCausalLM', 'MedusaModel', 'EAGLEModel', 'MLPSpeculatorPreTrainedModel', 'JambaForCausalLM', 'GraniteForCausalLM', 'MistralModel', 'Blip2ForConditionalGeneration', 'ChameleonForConditionalGeneration', 'FuyuForCausalLM', 'InternVLChatModel', 'LlavaForConditionalGeneration', 'LlavaNextForConditionalGeneration', 'MiniCPMV', 'PaliGemmaForConditionalGeneration', 'Phi3VForCausalLM', 'UltravoxModel', 'BartModel', 'BartForConditionalGeneration']
Case Explanation / 案例解释
No response