jimburtoft closed this issue 1 day ago
It looks like this might be related to PR-2671. I changed the lines around line 171 in setup.py:
```python
def _is_cuda() -> bool:
    return (torch.version.cuda is not None) and not _is_neuron()
```
And then the process finished.
Keep in mind that on my system I don't have CUDA installed.
Sorry for the inconvenience. We need to make sure the `neuron-ls` and `neuronx-cc` commands are correctly installed on the system before running `pip install vllm`.
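The pre-install requirement above can be verified with a small script. This is a hedged sketch, not part of vLLM itself: the helper name is hypothetical, and only the two tool names come from the comment above.

```python
import shutil

# Tool names taken from the comment above; the rest is a hypothetical check.
REQUIRED_TOOLS = ["neuron-ls", "neuronx-cc"]


def missing_neuron_tools() -> list:
    """Return the subset of required Neuron CLI tools not found on PATH."""
    return [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]


if __name__ == "__main__":
    missing = missing_neuron_tools()
    if missing:
        print("Missing Neuron tools:", ", ".join(missing))
    else:
        print("Neuron toolchain found; safe to run `pip install vllm`.")
```

`shutil.which` only checks that the executables are on `PATH`; it does not run them, so it works even on a machine with no Neuron device attached.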
I have the same problem. I'm trying to create a Docker image which I can then deploy on AWS inf2 instances. However, I don't have access to a Neuron instance during the Docker image build. I have installed `neuron-ls` and `neuronx-cc` in the image, but of course when `neuron-ls` is executed during the Docker build there is no Neuron device available.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!
Your current environment
The script failed in my virtual environment, so I ran it outside
How you are installing vllm
Produces error: