vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Usage]: Experiencing weird import bugs and errors after installing with pip install -e . #4554

Open KevinCL16 opened 5 months ago

KevinCL16 commented 5 months ago

Your current environment

Traceback (most recent call last):
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 721, in <module>
    main()
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 700, in main
    output = get_pretty_env_info()
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 695, in get_pretty_env_info
    return pretty_str(get_env_info())
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 532, in get_env_info
    vllm_version = get_vllm_version()
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 264, in get_vllm_version
    return vllm.__version__
AttributeError: module 'vllm' has no attribute '__version__'

How would you like to use vllm

Like in the previous issue, I tried to install from the repo using pip install -e . and had trouble importing LLM:


Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'LLM' from 'vllm' (unknown location)

I got around this issue by trying the following imports instead:

    from vllm.entrypoints.llm import LLM
    from vllm.sampling_params import SamplingParams

However, I ran into another error:

    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/yangzhiyu/workspace/open-long-agent/vllm/vllm/engine/llm_engine.py", line 291, in from_engine_args
    engine = cls(
  File "/home/yangzhiyu/workspace/open-long-agent/vllm/vllm/engine/llm_engine.py", line 110, in __init__
    vllm.__version__,
AttributeError: module 'vllm' has no attribute '__version__'

I wonder if installing with pip install -e . is bugged?
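
A quick way to see what is going on is to check where Python actually resolves the vllm module from. The sketch below is only a diagnostic (the file name check_vllm_import.py is made up); it assumes nothing about vllm beyond being able to import it:

    # check_vllm_import.py -- run from the same directory where the failing script runs
    import sys
    import vllm

    # If "vllm" is picked up as a bare directory on sys.path (e.g. the repo root
    # sitting next to your script), Python treats it as an empty namespace package:
    # __file__ is None and nothing defined in the package's __init__.py
    # (LLM, SamplingParams, __version__) is available.
    print("vllm resolved to:", getattr(vllm, "__file__", None))
    print("first sys.path entry:", sys.path[0])
    print("has __version__:", hasattr(vllm, "__version__"))
    print("has LLM:", hasattr(vllm, "LLM"))

If the first line prints None, or a path pointing at the repo root rather than the inner vllm/vllm package, the errors above come from import shadowing rather than a broken install.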

Deok-min commented 5 months ago

I met the same error. I built the container from the nvcr.io/nvidia/pytorch:24.04-py3 Docker image and installed xformers from source to keep the torch version (otherwise it caused a torch version conflict for me). After building vllm from source with "pip install -e .", I tried

python3 -m vllm.entrypoints.api_server ....

Then the error below occurs.

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 25, in <module>
    from vllm.entrypoints.openai.serving_chat import OpenAIServingChat
  File "/workspace/vllm/vllm/entrypoints/openai/serving_chat.py", line 19, in <module>
    from vllm.model_executor.guided_decoding import (
  File "/workspace/vllm/vllm/model_executor/guided_decoding/__init__.py", line 5, in <module>
    from vllm.model_executor.guided_decoding.lm_format_enforcer_decoding import (
  File "/workspace/vllm/vllm/model_executor/guided_decoding/lm_format_enforcer_decoding.py", line 8, in <module>
    from lmformatenforcer.integrations.vllm import (
  File "/usr/local/lib/python3.10/dist-packages/lmformatenforcer/integrations/vllm.py", line 34, in <module>
    def build_vllm_token_enforcer_tokenizer_data(tokenizer: Union[vllm.LLM, PreTrainedTokenizerBase]) -> TokenEnforcerTokenizerData:
AttributeError: module 'vllm' has no attribute 'LLM'

Deok-min commented 5 months ago

I changed the root directory name, and that solved the problem.

GARRYHU commented 5 months ago

I have encountered the same problem as you, @KevinCL16. It might be because you placed the Python file that runs the model (say it's called run.py) in the same directory as the vllm repository folder. Even though pip install -e . installs the vllm package into site-packages, run.py will still try to import from the vllm/ folder in the current directory first (whereas the actual modules live in vllm/vllm/), so the import fails. Try moving run.py to a different location and then using from vllm import LLM, SamplingParams; that may solve the problem. A sketch of what run.py can look like after the move is below.
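
Once the checkout no longer shadows the installed package, the plain top-level import should work again. A minimal sketch of what run.py might look like when it lives outside the vllm checkout (the model name facebook/opt-125m is only a placeholder):

    # run.py -- placed OUTSIDE the vllm checkout so "import vllm" hits the editable install
    from vllm import LLM, SamplingParams

    # Any model you have access to works here; opt-125m is just a small example.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=32)

    outputs = llm.generate(["Hello, my name is"], params)
    for out in outputs:
        print(out.outputs[0].text)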

chg0901 commented 3 months ago

I encountered the same problem as Deok-min described above.

trislee02 commented 3 months ago

Please make sure the vllm folder is not in the directory where you run the command. That worked for me.
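
If you want the script to catch this situation early, a small guard at the top can detect the shadowing before any real vllm call is made. This is just a sketch; it relies only on the fact that a shadowed import shows up as a namespace package with no __file__:

    # Fail fast if "vllm" resolved to a stray local directory instead of the installed package.
    import vllm

    if getattr(vllm, "__file__", None) is None:
        raise RuntimeError(
            "'vllm' was imported as a bare directory (namespace package), not the "
            "installed package. Run from outside the vllm checkout or rename the folder."
        )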