xorbitsai / inference

Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
https://inference.readthedocs.io
Apache License 2.0

deepseek-coder-6.7b-instruct deployment error #1755

Closed by xxch 4 months ago

xxch commented 4 months ago

Describe the bug

A800 with 80G VRAM, 32G RAM. I already have a Qwen-57B-q5 model deployed that uses 40G of VRAM; when I then deploy deepseek-coder-6.7b-instruct, it fails with an insufficient-resources error.

(screenshot: error message)

To Reproduce

To help us reproduce this bug, please provide the information below:

  1. Your Python version: 3.11
  2. The version of Xinference you use: 0.12.1
  3. Versions of crucial packages.
  4. Full stack trace of the error.
  5. Minimized code to reproduce the error.

ChengjieLi28 commented 4 months ago

Right now one GPU card can only run one LLM. To work around this limit (assuming your VRAM is sufficient — with 80G it should be), specify `gpu_idx` when launching the model. See https://inference.readthedocs.io/en/latest/reference/generated/xinference.client.Client.launch_model.html
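A minimal sketch of the suggestion above, using the `gpu_idx` parameter of `Client.launch_model` from the linked reference to pin a model to specific devices. The endpoint URL and model name here are placeholders; it assumes a running Xinference server.

```python
def launch_on_gpu(endpoint: str, model_name: str, gpu_idx: list) -> str:
    """Launch `model_name` on the GPUs listed in `gpu_idx`; returns the model UID."""
    # Deferred import: requires the xinference package to be installed.
    from xinference.client import Client

    client = Client(endpoint)
    # gpu_idx pins this model to explicit devices, so two LLMs can
    # share a card or be split across cards deliberately.
    return client.launch_model(model_name=model_name, gpu_idx=gpu_idx)


if __name__ == "__main__":
    # Example (placeholder endpoint): run deepseek-coder-instruct on GPU 0
    # while another model already occupies part of that card.
    launch_on_gpu("http://localhost:9997", "deepseek-coder-instruct", [0])
```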

xxch commented 4 months ago

Even after shutting down the other model and running only deepseek-coder-6.7b, it still errors. (screenshot: error message)

ChengjieLi28 commented 4 months ago

@xxch Don't just describe the symptom — provide details. This looks like a custom-registered model? How did you register it, and what values did you fill in for each parameter? Which inference engine did you select at launch? What is the full backend error?

ChengjieLi28 commented 4 months ago

Launching the built-in deepseek-coder-instruct 1.3b works fine for me. My transformers version: 4.41.2

xxch commented 4 months ago

I used custom registration. Model Family: deepseek-coder; the model Format in the model specs is PyTorch; at launch I selected Transformers, everything else is the default. Server error: 400 - [address=0.0.0.0:38081, pid=2549300] Unrecognized image processor in /app/models/deepseek-coder-6.7b-instruct. Should have a `image_processor_type` key in its preprocessor_config.json of config.json, or one of the following `model_type` keys in its config.json: align, beit, bit, blip, blip-2, bridgetower, chinese_clip, clip, clipseg, conditional_detr, convnext, convnextv2, cvt, data2vec-vision, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, donut-swin, dpt, efficientformer, efficientnet, flava, focalnet, fuyu, git, glpn, grounding-dino, groupvit, idefics, idefics2, imagegpt, instructblip, kosmos-2, layoutlmv2, layoutlmv3, levit, llava, llava_next, mask2former, maskformer, mgp-str, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, nat, nougat, oneformer, owlv2, owlvit, paligemma, perceiver, pix2struct, poolformer, pvt, pvt_v2, regnet, resnet, sam, segformer, seggpt, siglip, swiftformer, swin, swin2sr, swinv2, table-transformer, timesformer, tvlt, tvp, udop, upernet, van, video_llava, videomae, vilt, vipllava, vit, vit_hybrid, vit_mae, vit_msn, vitmatte, xclip, yolos
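As an aside, the error above comes from transformers' image-processor auto-loading being invoked on a text-only checkpoint. A quick, hypothetical sanity check is to read the checkpoint's `config.json` and confirm it carries a plain text `model_type` (deepseek-coder checkpoints are llama-based); the directory here is a stand-in, not the reporter's actual path.

```python
import json
import os
import tempfile


def read_model_type(model_dir: str):
    """Return the `model_type` declared in a checkpoint's config.json."""
    with open(os.path.join(model_dir, "config.json")) as f:
        return json.load(f).get("model_type")


# Demo with a stand-in directory instead of a real checkpoint.
d = tempfile.mkdtemp()
with open(os.path.join(d, "config.json"), "w") as f:
    json.dump({"model_type": "llama"}, f)

print(read_model_type(d))  # → llama
```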

ChengjieLi28 commented 4 months ago

> Server error: 400 - [address=0.0.0.0:38081, pid=2549300] Unrecognized image processor in /app/models/deepseek-coder-6.7b-instruct. […]

Did you select chat when registering? If you chose chat, the model family should be deepseek-coder-instruct. From the error it somehow went down the deepseek-vl code path — why would an image processor even come up? This is most likely still a registration problem.
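A hypothetical registration payload illustrating the fix: the `model_family` must name the chat variant, deepseek-coder-instruct. The model name, URI, and other field values below are placeholders, and the exact schema may differ between Xinference versions — consult the custom-models documentation for your release.

```json
{
  "version": 1,
  "model_name": "my-deepseek-coder-6.7b-instruct",
  "model_lang": ["en"],
  "model_ability": ["chat"],
  "model_family": "deepseek-coder-instruct",
  "model_specs": [
    {
      "model_format": "pytorch",
      "model_size_in_billions": 7,
      "quantizations": ["none"],
      "model_uri": "/app/models/deepseek-coder-6.7b-instruct"
    }
  ]
}
```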

xxch commented 4 months ago

Thanks, that fixed it for me — it should indeed be deepseek-coder-instruct.