vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Missing TextTokensPrompt class #8235

Open shubh9m opened 1 week ago

shubh9m commented 1 week ago

🐛 Describe the bug

I tried to pull a vllm docker image and run it on my 4060 GPU but I am encountering this error:

```
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/opt/vllm/lib64/python3.11/site-packages/vllm_tgis_adapter/__main__.py", line 43, in
  from .grpc import run_grpc_server
File "/opt/vllm/lib64/python3.11/site-packages/vllm_tgis_adapter/grpc/__init__.py", line 1, in
  from .grpc_server import run_grpc_server
File "/opt/vllm/lib64/python3.11/site-packages/vllm_tgis_adapter/grpc/grpc_server.py", line 23, in
  from vllm.inputs import TextTokensPrompt
ImportError: cannot import name 'TextTokensPrompt' from 'vllm.inputs' (/opt/vllm/lib64/python3.11/site-packages/vllm/inputs/__init__.py)
```


DarkLight1337 commented 1 week ago

Which version of vLLM are you using? Make sure the docker image is up to date.

shubh9m commented 1 week ago

Sorry for the delayed response. The version of vLLM is the same for both, and the docker image is also up to date.

DarkLight1337 commented 1 week ago

Which docker image are you using? I see that the package name is vllm_tgis_adapter which isn't the usual one for vLLM.

shubh9m commented 1 week ago

I rechecked, and the docker image has the following vLLM version: 0.5.3.post1+cu124, while my system vLLM version is 0.6.0.
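These two version strings can be compared programmatically; the sketch below (a hypothetical helper, not part of vLLM or the adapter) strips local/build metadata like `+cu124` and suffixes like `.post1` before comparing, confirming that the image (0.5.3) lags behind the host install (0.6.0), which explains why `vllm_tgis_adapter` and `vllm.inputs` disagree about `TextTokensPrompt`:

```python
def version_tuple(v: str) -> tuple:
    # Drop local/build metadata after "+", then keep only the digits in each
    # dotted component (so "post1" contributes 1) and compare as integers.
    base = v.split("+")[0]
    parts = []
    for piece in base.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)


image_version = "0.5.3.post1+cu124"  # reported inside the docker image
system_version = "0.6.0"             # reported on the host system

assert version_tuple(image_version) < version_tuple(system_version)
```

For anything beyond a quick check, `packaging.version.Version` handles PEP 440 versions (including `.post` and local segments) correctly and would be the robust choice.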