Open weiminw opened 1 week ago
After starting MiniCPM-Llama3-V-2_5, requests that pass the image to image_url as a base64 data URL get no reply.
Reproduction

Launch:
lmdeploy serve api_server /workspace/vlm/MiniCPM-Llama3-V-2_5 --model-name mini --backend pytorch --server-port 8000

API call:
{ "model": "mini", "max_tokens": 1024, "messages": [ { "role": "user", "content": [{"type": "text": "text":"描述你看到的图片"}, {"type":"image_url","image_url": {"url": "data:image/jpeg;base64,........."}}]
] }
The message in the response is empty.
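For reference, a minimal client-side sketch of the same request in Python, assuming the server launched above is reachable at http://localhost:8000 and that a local file test.jpg (a placeholder path) supplies the base64 payload, sent with the requests library:

import base64
import requests

# Read a local image and encode it for a base64 data URL (test.jpg is a placeholder path).
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "mini",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "描述你看到的图片"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
}

# Call the OpenAI-compatible chat completions endpoint and print the reply text.
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.json()["choices"][0]["message"]["content"])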
Environment

sys.platform: linux
Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 4090
CUDA_HOME: /usr/local/cuda
NVCC: Not Available
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.2.2+cu121
PyTorch compiling details: PyTorch built with:
  GCC 9.3
  C++ Version: 201703
  Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  OpenMP 201511 (a.k.a. OpenMP 4.5)
  LAPACK is enabled (usually provided by MKL)
  NNPACK is enabled
  CPU capability usage: AVX2
  CUDA Runtime 12.1
  NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  CuDNN 8.9.2
  Magma 2.6.1
  Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF
TorchVision: 0.17.2+cu121
LMDeploy: 0.4.2+
transformers: 4.41.2
gradio: Not Found
fastapi: 0.111.0
pydantic: 2.7.4
triton: 2.2.0
Error traceback

No response
You can add --log-level INFO to the launch command, then check the server-side logs.
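For example, based on the launch command above:

lmdeploy serve api_server /workspace/vlm/MiniCPM-Llama3-V-2_5 --model-name mini --backend pytorch --server-port 8000 --log-level INFO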
For the api call, please post the complete request. You can write it to a text file and upload it to this issue.
The file is too large; let me try uploading it: out.txt
MiniCPM-Llama3-V-2_5 does not currently support the pytorch backend. Try removing --backend pytorch.
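That is, a launch command along the lines of:

lmdeploy serve api_server /workspace/vlm/MiniCPM-Llama3-V-2_5 --model-name mini --server-port 8000

so that the default backend is used instead of pytorch.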