Open ghost opened 2 months ago
Hi @shur-complement, thanks for pointing this issue out. Since vision models can bring in dependencies that are unnecessary for plain LLM models, we currently let users handle them case by case. We have to admit this is not convenient.
@AllentDan @irexyc How about adding another requirements file, `vision.txt`, for VLM models? Then users could run `pip install lmdeploy[vision]` to install the dependencies of VLM models. Or, any other ideas?
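For reference, the extras wiring behind `pip install lmdeploy[vision]` could be sketched as below; the helper function and the `requirements/vision.txt` path are illustrative assumptions, not lmdeploy's actual `setup.py`:

```python
# Sketch: how a "vision" extra could be wired up so that
# `pip install lmdeploy[vision]` pulls in requirements/vision.txt.
# The file path and helper name are illustrative, not lmdeploy's code.

def parse_requirements(path):
    """Read one requirement per line, skipping blanks and comments."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

# In setup.py this would be passed as
# setup(..., extras_require={"vision": parse_requirements("requirements/vision.txt")})
```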
What packages would go into the file? There are many packages in third-party repos like llava, and different repos may use conflicting Python packages.
Can `--no-deps` eliminate conflicts?
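For context, `--no-deps` tells pip to install only the named package and skip its declared requirements, so a third-party repo's pins cannot override lmdeploy's own. A hedged sketch (the LLaVA URL below is the upstream repo, assumed here for illustration):

```shell
# --no-deps installs only the named package, skipping its declared
# dependencies. Verify the flag exists in your pip:
python3 -m pip install --help | grep -- "--no-deps"

# To apply it to a VL repo (example URL, adapt to your setup):
# python3 -m pip install --no-deps "git+https://github.com/haotian-liu/LLaVA.git"
```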
Looks good to me
@lvhan028 Maybe we could lock the commit ids of the VL repos in the vl.txt? We got burned by this with minigemini.
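Pinned entries in vl.txt could look like the sketch below; the SHAs are placeholders, not verified commits, and the repo URLs are the assumed upstream sources:

```
# vl.txt -- pin third-party VL repos to known-good commits
# (placeholder SHAs; replace with tested commit ids)
llava @ git+https://github.com/haotian-liu/LLaVA.git@<commit-sha>
deepseek_vl @ git+https://github.com/deepseek-ai/DeepSeek-VL.git@<commit-sha>
```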
I think it would also be reasonable to update the documentation to mention this issue. I can update the docs in a PR if desired.
That's very kind of you. Looking forward to your PR.
@lvhan028 will make the user guide for each VLM, as discussed internally.
Checklist
Describe the bug
Hi folks, thanks for this great project. I am raising this issue in case it helps anyone else with Docker.
When running different models with Docker as per the instructions at Option 2: Deploying with Docker, I have run into various issues with Python dependencies. I resolved them by creating a new Dockerfile with the necessary Python deps installed. For example, when running Llava 1.6 34B one hits dependency errors such as "no such module timm" and "install llava at git@...". The following Dockerfile works for me for Llava:
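(The original Dockerfile was not preserved in this thread; the sketch below assumes the `openmmlab/lmdeploy` base image from the docs and the upstream LLaVA repo — verify the tag and pins against your setup.)

```dockerfile
# Minimal sketch, assuming the base image from the lmdeploy docs;
# the LLaVA source below is the upstream repo, not a verified pin.
FROM openmmlab/lmdeploy:latest

# Llava 1.6 needs timm, plus the llava package itself;
# --no-deps keeps LLaVA's pins from conflicting with lmdeploy's.
RUN python3 -m pip install timm
RUN python3 -m pip install --no-deps "git+https://github.com/haotian-liu/LLaVA.git"
```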
Likewise for Yi-VL.
For Deepseek-VL, this worked:
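(Again, the original block was not preserved; a hedged sketch, assuming the upstream DeepSeek-VL repo as the package source:)

```dockerfile
# Sketch for Deepseek-VL; verify the repo URL and base image tag locally
FROM openmmlab/lmdeploy:latest

RUN python3 -m pip install "git+https://github.com/deepseek-ai/DeepSeek-VL.git"
```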
Reproduction
Environment
Error traceback
No response