InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Issues Running Vision Language Models in Docker #1514

Open ghost opened 2 months ago

ghost commented 2 months ago


Describe the bug

Hi folks, thanks for this great project. I am raising this issue in case it helps anyone else with Docker.

When running different models with Docker per the instructions at "Option 2: Deploying with Docker", I have run into various Python dependency issues. I resolved them by creating a new Dockerfile that installs the missing packages. For example, running Llava 1.6 34B fails with errors such as "no module named timm" and a prompt to "install llava at git@...":

The following Dockerfile works for me for Llava:

FROM openmmlab/lmdeploy:latest

RUN apt-get update && apt-get install -y python3 python3-pip git

WORKDIR /app

RUN pip3 install --upgrade pip
# timm is required by Llava but is not shipped in the base image.
RUN pip3 install timm
# --no-deps avoids pulling in llava's own pinned requirements,
# which may conflict with packages already in the base image.
RUN pip3 install git+https://github.com/haotian-liu/LLaVA.git --no-deps

COPY . .

CMD ["lmdeploy", "serve", "api_server", "liuhaotian/llava-v1.6-34b"]
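
For reference, building and serving with this image looks like the following (the lmdeploy-llava tag is just an example name, not anything prescribed by the docs):

docker build -t lmdeploy-llava .
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 23333:23333 \
    --ipc=host \
    lmdeploy-llava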

Likewise for Yi-VL.

For Deepseek-VL, this worked:

FROM openmmlab/lmdeploy:latest

RUN apt-get update && apt-get install -y python3 python3-pip git

WORKDIR /app

RUN pip3 install --upgrade pip
# Install DeepSeek-VL without its pinned requirements to avoid conflicts.
RUN pip3 install git+https://github.com/deepseek-ai/DeepSeek-VL.git --no-deps
# attrdict and timm are imported by DeepSeek-VL at runtime.
RUN pip3 install attrdict
RUN pip3 install timm

COPY . .

CMD ["lmdeploy", "serve", "api_server", "deepseek-ai/deepseek-vl-7b-chat"]
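
A quick way to confirm the extra dependencies resolve inside the image (the lmdeploy-deepseek-vl tag is an example name, and the module names are assumptions based on the repos above):

docker run --rm lmdeploy-deepseek-vl \
    python3 -c "import timm, attrdict, deepseek_vl; print('deps OK')"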

Reproduction

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 23333:23333 \
    --ipc=host \
    openmmlab/lmdeploy:latest \
    lmdeploy serve api_server liuhaotian/llava-v1.6-34b
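
Once the container is up, the server can be checked through the OpenAI-compatible endpoints that api_server exposes, for example:

curl http://localhost:23333/v1/models

curl http://localhost:23333/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "liuhaotian/llava-v1.6-34b", "messages": [{"role": "user", "content": "Hello"}]}'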

Environment

Ubuntu 22.04

Error traceback

No response

lvhan028 commented 2 months ago

Hi @shur-complement, thanks for pointing this issue out. Since vision models can bring in dependencies that are unnecessary for LLM-only deployments, we let users handle them case by case. We have to admit this is not convenient.

@AllentDan @irexyc How about adding a separate requirements file, 'vision.txt', for VLM models? Then users could run pip install lmdeploy[vision] to install the packages that VLM models depend on.
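
As a sketch, such a file could simply list the extra packages in PEP 508 form (the contents here are guesses drawn from the Dockerfiles above, not a vetted list):

timm
attrdict
llava @ git+https://github.com/haotian-liu/LLaVA.git
deepseek_vl @ git+https://github.com/deepseek-ai/DeepSeek-VL.git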

Or, any other ideas?

AllentDan commented 2 months ago

Which packages would go into that file? Third-party repos like llava pull in many packages, and different repos may pin conflicting Python packages.

lvhan028 commented 2 months ago

Can --no-deps eliminate conflicts?

AllentDan commented 2 months ago

> Can --no-deps eliminate conflicts?

Looks good to me

AllentDan commented 2 months ago

@lvhan028 Maybe we could also lock the commit IDs of the VL repos in vl.txt? We got burned by this with minigemini.
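
Pinning could be done directly in that file, e.g. (the hashes below are placeholders, not real commits):

llava @ git+https://github.com/haotian-liu/LLaVA.git@<commit-sha>
deepseek_vl @ git+https://github.com/deepseek-ai/DeepSeek-VL.git@<commit-sha>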

ghost commented 2 months ago

I think it would also be reasonable to update the documentation to mention this issue. I can open a PR to update the docs if desired.

lvhan028 commented 2 months ago

That's very kind of you. Looking forward to your PR.

lvhan028 commented 3 weeks ago

@lvhan028 will make a user guide for each VLM, as discussed internally.