Closed: keeganNull closed this issue 1 month ago
Can you show the full stack trace?
I met the same problem.
2024-10-17 22:43:29.392770: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-10-17 22:43:29.444466: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-10-17 22:43:30.383760: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[INFO:swift] Successfully registered `[***]`
WARNING 10-17 22:43:33 cuda.py:22] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm._version'
from vllm.version import __version__ as VLLM_VERSION
[ERROR:swift] import vllm_utils error: Invalid version: 'dev'
[INFO:swift] No LMDeploy installed, if you are using LMDeploy, you will get `ImportError: cannot import name 'prepare_lmdeploy_engine_template' from 'swift.llm'`
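The `[ERROR:swift] import vllm_utils error: Invalid version: 'dev'` line above suggests swift could not parse the version string the broken vllm install reported. As a hedged illustration (not swift's actual code), PEP 440 parsing via the `packaging` library rejects a bare `'dev'` string in exactly this way:

```python
from packaging.version import InvalidVersion, Version

def is_valid_pep440(version_string: str) -> bool:
    """Return True if the string parses as a PEP 440 version."""
    try:
        Version(version_string)
        return True
    except InvalidVersion:
        return False

print(is_valid_pep440("0.6.3"))  # True  -- a normal release version
print(is_valid_pep440("dev"))    # False -- bare 'dev' is not PEP 440
```

So if vllm's version metadata comes out as `'dev'` (e.g. because `vllm._version` is missing, as in the RuntimeWarning above), any downstream version check of this kind would fail.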
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 5
2 os.environ['CUDA_VISIBLE_DEVICES'] = '0'
3 from tqdm import tqdm
----> 5 from swift.llm import (
6 ModelType, get_vllm_engine, get_default_template_type,
7 get_template, inference_vllm
8 )
9 from swift.utils import seed_everything
10 import torch
ImportError: cannot import name 'get_vllm_engine' from 'swift.llm'
torch==2.4.0
torchaudio==2.4.0+cu121
torchvision==0.19.0+cu121
tornado==6.4.1
tqdm==4.66.5
traitlets==5.14.3
transformers==4.45.2
transformers-stream-generator==0.0.5
triton==3.0.0
trl==0.11.4
typer==0.12.5
typing_extensions==4.12.2
tyro==0.8.12
tzdata==2024.2
urllib3==2.2.3
urwid==2.6.15
uvicorn==0.32.0
uvloop==0.21.0
venus-api-all==1.3.26
venus-api-base==1.3.26
venus-boot==1.3.26
venus-flow==1.3.26
venus-flow-operator==1.3.26
venus-ml==1.3.26
venus-sdk==1.3.26
venus-tools==1.3.26
vllm==0.6.3
Which version of swift are you using? Maybe there is some incompatibility issue.
ms-swift-2.5.0.dev0
```shell
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e .[llm]
```
> Which version of swift are you using? Maybe there is some incompatibility issue.
I suggest that you open an issue on their repo explaining this problem, since it isn't maintained by us.
Upgrading vllm to the latest version solved the problem.
Your current environment
bug of vllm 0.6.3
```text
ImportError: cannot import name 'get_vllm_engine' from 'swift.llm'
```

Model Input Dumps
No response
🐛 Describe the bug
It happens when I run `from swift.llm import get_vllm_engine` with vllm 0.6.3 installed. After I installed vllm 0.6.2, the error no longer occurs.