InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] pipeline.stream_infer does not support OpenAI-format history (text + image), nor the sess usage #2179

Open owl-10 opened 1 month ago

owl-10 commented 1 month ago


Describe the bug

I need multi-turn conversation with a multimodal model, with streaming output, so I adopted the pipeline.stream_infer interface. However, it does not support a sess history like the one in chat. There is also no documentation for pipeline.stream_infer beyond a simple demo, so I had to read the source, which shows that the prompts argument accepts a string prompt, a list of string prompts, a chat history in OpenAI format, or a list of chat histories. When I pass a chat history in OpenAI format containing text + image, it raises an error.

Reproduction

prompts = [{
    'role': 'user',
    'content': [
        {'type': 'text', 'text': 'describe this image'},
        {'type': 'image_url',
         'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
    ]
}]
for response_part in generate_response([prompts]):
    yield response_part

  File "/root/miniconda3/envs/cup/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/root/miniconda3/envs/cup/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/serve/async_engine.py", line 495, in
    proc = Thread(target=lambda: loop.run_until_complete(gather()))
  File "/root/miniconda3/envs/cup/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/serve/async_engine.py", line 490, in gather
    await asyncio.gather(
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/serve/async_engine.py", line 483, in _inner_call
    async for out in generator:
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/serve/async_engine.py", line 571, in generate
    prompt_input = await self._get_prompt_input(prompt,
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/serve/vl_async_engine.py", line 55, in _get_prompt_input
    decorated = self.vl_prompt_template.messages2prompt(
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/vl/templates.py", line 139, in messages2prompt
    new_messages = self.convert_messages(messages, sequence_start)
  File "/root/miniconda3/envs/cup/lib/python3.8/site-packages/lmdeploy/vl/templates.py", line 121, in convert_messages
    if item['type'] == 'image_url':
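For reference, a minimal self-contained version of the reproduction (the original snippet calls an unshown generate_response helper; the model path below is an assumption, matching the internvl2-2b model noted in the environment). Running this triggers the error above:

from lmdeploy import pipeline

# Assumed model path for internvl2-2b.
pipe = pipeline('OpenGVLab/InternVL2-2B')

# A single-turn OpenAI-format chat history mixing text and image_url items.
prompts = [{
    'role': 'user',
    'content': [
        {'type': 'text', 'text': 'describe this image'},
        {'type': 'image_url',
         'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
    ]
}]

# Batch of one chat history; each yielded item carries a partial .text.
for out in pipe.stream_infer([prompts]):
    print(out.text, end='', flush=True)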

Environment

sys.platform: linux
Python: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA A100-PCIE-40GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.105
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.2.2+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.17.2+cu121
LMDeploy: 0.5.1+
transformers: 4.37.2
gradio: 4.36.0
fastapi: 0.111.1
pydantic: 2.8.2
triton: 2.2.0
NVIDIA Topology: 
        GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-79    0-1             N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
Using the internvl2-2b model.

Error traceback

No response

lvhan028 commented 4 weeks ago

The stream_infer-with-session usage was not considered in the design.
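Since stream_infer carries no session state, a possible workaround (a sketch only, not a confirmed recipe from the maintainers) is to keep the history on the caller's side: load images with lmdeploy.vl.load_image, pass a (prompt, image) tuple per call, and fold earlier turns into the prompt text. The USER/ASSISTANT folding below is a plain-text approximation, not the model's real chat template:

from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Assumed model path for internvl2-2b.
pipe = pipeline('OpenGVLab/InternVL2-2B')

# load_image accepts a URL or local path.
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')

history = []  # (user_text, assistant_text) pairs, kept by the caller

def stream_turn(user_text):
    # Fold earlier turns into plain text; an approximation of the chat
    # template, not the engine's native session handling.
    context = ''.join(f'USER: {u}\nASSISTANT: {a}\n' for u, a in history)
    reply = ''
    # (prompt, image) tuples are the documented VLM pipeline input form.
    for out in pipe.stream_infer([(context + user_text, image)]):
        reply += out.text
        print(out.text, end='', flush=True)
    history.append((user_text, reply))

stream_turn('describe this image')
stream_turn('what animal is it doing?'.replace(' doing', ''))  # second turn reuses the folded history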

owl-10 commented 3 weeks ago

The stream_infer-with-session usage was not considered in the design.

But even when trying to give stream_infer multimodal conversation history by passing it as OpenAI-format prompts, it still fails.
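For concreteness, the kind of multi-turn history that still fails looks like the following (the turns are illustrative). Per the OpenAI format, the assistant turn's content is a plain string while user turns are lists of text/image_url items, which may be relevant to where the traceback stops (convert_messages in lmdeploy/vl/templates.py):

# Multi-turn OpenAI-format history, as attempted with pipe.stream_infer([history]).
history = [
    {'role': 'user',
     'content': [
         {'type': 'text', 'text': 'describe this image'},
         {'type': 'image_url',
          'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
     ]},
    # Assistant content is a plain string, not a list of typed items.
    {'role': 'assistant', 'content': 'A tiger walking on grass.'},
    {'role': 'user', 'content': [{'type': 'text', 'text': 'what is it doing?'}]},
]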