SeungjaeLim opened this issue 6 months ago
It would be better if you could provide a full installation log via `pip install -i http://ftp.daumkakao.com/pypi/simple --trusted-host ftp.daumkakao.com -v -e .`, i.e. by adding the `-v` flag.
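For reference, a sketch of one way to capture that verbose log to a file (assuming you run it from your vllm checkout; the log filename is arbitrary):

```bash
# Re-run the editable install with verbose output and keep a copy in install.log
pip install -v -e . 2>&1 | tee install.log
```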
Thanks for letting me know. The problem now is that the pip install completes successfully, but when I import vllm I get the same error as above. I will change the tag to bug.
If your import fails, it is also an installation problem. We cannot help further without more details :(
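As a starting point, here is a quick sketch for gathering the version details that usually matter in these reports (the exact set of packages to check is my assumption, not an official checklist):

```bash
# Report the interpreter, PyTorch build, and CUDA runtime that pip resolved
python3 -c "import sys; print(sys.version)"
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
pip show vllm torch
```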
I ran into the same problem.
When you install vllm using pip, you can frequently run into 'undefined symbol' errors, often caused by a version conflict with PyTorch or another dependency. It is safer to build vllm from source. https://github.com/vllm-project/vllm/issues/129#issuecomment-1805088950
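For reference, the from-source route looks roughly like this (a sketch, assuming a clean environment; see the linked comment for the full steps):

```bash
# Remove any wheel-installed copy first so the freshly built extension is the one imported
pip uninstall -y vllm
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .  # compiles the C++/CUDA extensions against the local toolchain
```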
I built it from source and I still get the undefined symbol error (on a recent HEAD commit of the main branch):
```
root@b0da1a1ca0fd:~# python3
Python 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 16:04:32) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import triton
>>> import vllm
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/vllm/vllm/__init__.py", line 4, in <module>
    from vllm.engine.async_llm_engine import AsyncLLMEngine
  File "/root/vllm/vllm/engine/async_llm_engine.py", line 12, in <module>
    from vllm.engine.llm_engine import LLMEngine
  File "/root/vllm/vllm/engine/llm_engine.py", line 16, in <module>
    from vllm.model_executor.model_loader import get_architecture_class_name
  File "/root/vllm/vllm/model_executor/model_loader.py", line 10, in <module>
    from vllm.model_executor.models.llava import LlavaForConditionalGeneration
  File "/root/vllm/vllm/model_executor/models/llava.py", line 11, in <module>
    from vllm.model_executor.layers.activation import get_act_fn
  File "/root/vllm/vllm/model_executor/layers/activation.py", line 9, in <module>
    from vllm._C import ops
ImportError: /root/vllm/vllm/_C.cpython-310-powerpc64le-linux-gnu.so: undefined symbol: cuPointerGetAttribute
```
Let me know if any more information is needed, or if opening a new issue would be better.
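One thing that might help narrow this down: cuPointerGetAttribute lives in the CUDA driver library (libcuda.so), so a quick check, sketched below, is whether the compiled extension resolves that library at load time:

```bash
# "U" in the dynamic symbol table means the symbol is expected but unresolved
nm -D /root/vllm/vllm/_C.cpython-310-powerpc64le-linux-gnu.so | grep cuPointerGetAttribute
# List the CUDA libraries the loader actually finds for this extension
ldd /root/vllm/vllm/_C.cpython-310-powerpc64le-linux-gnu.so | grep -i cuda
```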
Build the container with DOCKER_BUILDKIT=0 and the problem will be solved. (The build needs a GPU.)
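Something like the following, assuming you are building from the Dockerfile at the repo root (the image tag is arbitrary):

```bash
# Use the legacy builder so the build can run on the GPU-enabled default runtime
DOCKER_BUILDKIT=0 docker build -t vllm-local .
```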
Your current environment
How you are installing vllm
I tried to import vllm, but it failed.