InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] InternVL-1.5 served via the LMDeploy API server cannot recognize images #1701

Closed BigWhiteFox closed 3 months ago

BigWhiteFox commented 3 months ago


Describe the bug

I successfully started an API server with LMDeploy, then built a local Gradio page that posts to the Chat Completions V1 endpoint. Plain text chat works fine, but when I try to upload an image for the model to recognize, none of the four request fields seems to offer a format for transmitting images. I remembered that in the InternLM tutorial camp, XComposer used to pass images to the model as encoded data, so I converted the image to RGB, Base64-encoded it, and appended it to the end of `content`. After sending, I get `<Response [200]>`, and the backend logs `INFO: 127.0.0.1:43626 - "POST /v1/chat/completions HTTP/1.1" 200 OK`, but `print(json_response["choices"][0]["message"]["content"])` prints nothing. So I would like to ask: is there a dedicated format/method for sending images to the model through the API server? If not, is my approach or reasoning flawed, and how should I fix it?

Reproduction

Code:

```python
import gradio as gr
import requests
import base64
from io import BytesIO
from PIL import Image

# Replace with your API URL
api_url = "http://0.0.0.0:23333/v1/chat/completions"


def image_to_base64(img):
    """Converts an image to a Base64 string in RGB mode."""
    # Make sure the image is in RGB mode
    if img.mode != "RGB":
        img = img.convert("RGB")
    buffered = BytesIO()
    img.save(buffered, format="JPEG")
    img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
    return img_str


def generate_story(text_input, image_input=None):
    if image_input is not None:
        # Convert the image to Base64
        img_base64 = image_to_base64(image_input)
        # Append the Base64 string to the content, assuming the API
        # recognizes this particular delimiter
        content_with_image = f"{text_input} <Img><{img_base64}></Img>"
    else:
        content_with_image = text_input

    # Build the request payload, same as before but with the
    # updated content_with_image
    data = {
        "model": "internvl-internlm2",
        "messages": [{"content": content_with_image, "role": "user"}],
        "temperature": 0.7,
        "top_p": 1,
        "logprobs": False,
        "top_logprobs": 0,
        "n": 1,
        "max_tokens": None,
        "stop": None,
        "stream": False,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "user": "string",
        "repetition_penalty": 1,
        "session_id": -1,
        "ignore_eos": False,
        "skip_special_tokens": True,
        "top_k": 40
    }

    # Send the POST request; the follow-up logic is unchanged
    response = requests.post(api_url, json=data)
    print(response)
    print(response.status_code)
    if response.status_code == 200:
        print("Response:", response)
        json_response = response.json()
        print(json_response["choices"][0]["message"]["content"])
        return json_response["choices"][0]["message"]["content"]
    else:
        return {"error": f"API request failed with status code: {response.status_code}"}


# Update the Gradio interface to include an image input
iface = gr.Interface(
    fn=generate_story,
    inputs=[
        gr.components.Textbox(placeholder="User message", label="Input Story"),
        gr.Image(type="pil", label="Upload an Image (optional)")
    ],
    outputs=gr.components.Textbox(placeholder="Story generated by the API", label="Generated Story")
)

iface.launch()
```

Two demo conversations (screenshots): the backend logs show both requests sent normally; with an image uploaded there is no output (abnormal); with the image removed the conversation works normally.

Environment

(lmdeploy) root@intern-studio-40073828:~/models# lmdeploy check_env
sys.platform: linux
Python: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA A100-SXM4-80GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.2, V12.2.140
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
PyTorch: 2.2.2+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.5  (built against CUDA 11.7)
    - Built with CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.17.2+cu121
LMDeploy: 0.4.2+
transformers: 4.41.1
gradio: 3.50.2
fastapi: 0.111.0
pydantic: 2.7.1
triton: 2.2.0

Error traceback

No response

irexyc commented 3 months ago

"messages": [{"content": content_with_image, "role": "user"}],

This format is incorrect. Please refer to the GPT-4V message format:

https://platform.openai.com/docs/guides/vision https://github.com/InternLM/lmdeploy/blob/main/docs/zh_cn/serving/api_server_vl.md
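For reference, a minimal sketch of the corrected payload under that format, reusing `text_input`, `img_base64`, `api_url`, and the model name from the reproduction above; the exact fields the server accepts should be double-checked against the linked LMDeploy doc:

```python
# Sketch of the GPT-4V-style request the maintainer points to: the user
# message carries a content list with separate text and image_url parts,
# and the image is passed as a base64 data URL rather than being appended
# to the prompt text.
data = {
    "model": "internvl-internlm2",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": text_input},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{img_base64}"}},
        ],
    }],
}
response = requests.post(api_url, json=data)
```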

BigWhiteFox commented 3 months ago

Thanks for the pointer, the problem is solved.