labring / FastGPT

FastGPT is a knowledge-base platform built on LLMs, offering a comprehensive suite of out-of-the-box capabilities such as data processing, RAG retrieval, and visual AI workflow orchestration, letting you easily develop and deploy complex question-answering systems without extensive setup or configuration.
https://fastgpt.in

Connect error occurs #727

Closed lai-serena closed 8 months ago

lai-serena commented 8 months ago

Routine checks

Your version

Problem description: I configured oneapi and connected a local chatglm2-6b model; when using fastgpt, a connect error appears. (screenshot)

Steps to reproduce (following the tutorial at https://doc.fastgpt.in/docs/development/custom-models/chatglm2/): 1. Deploy the chatglm2-6b model in Docker using the openai_api.py file, with the container exposing port 6006. The openai_api.py file is as follows:

# coding=utf-8
import argparse
import time
from contextlib import asynccontextmanager
from typing import List, Literal, Optional, Union

import numpy as np
import tiktoken
import torch
import uvicorn
from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, Field
from sentence_transformers import SentenceTransformer
from sklearn.preprocessing import PolynomialFeatures
from sse_starlette.sse import EventSourceResponse
from starlette.status import HTTP_401_UNAUTHORIZED
from transformers import AutoModel, AutoTokenizer
import json

@asynccontextmanager
async def lifespan(app: FastAPI):  # collects GPU memory
    yield
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()

app = FastAPI(lifespan=lifespan)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

class ChatMessage(BaseModel):
    role: Literal["user", "assistant", "system"]
    content: str

class DeltaMessage(BaseModel):
    role: Optional[Literal["user", "assistant", "system"]] = None
    content: Optional[str] = None

class ChatCompletionRequest(BaseModel):
    model: str
    messages: List[ChatMessage]
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    max_length: Optional[int] = None
    stream: Optional[bool] = False

class ChatCompletionResponseChoice(BaseModel):
    index: int
    message: ChatMessage
    finish_reason: Literal["stop", "length"]

class ChatCompletionResponseStreamChoice(BaseModel):
    index: int
    delta: DeltaMessage
    finish_reason: Optional[Literal["stop", "length"]]

class ChatCompletionResponse(BaseModel):
    model: str
    object: Literal["chat.completion", "chat.completion.chunk"]
    choices: List[
        Union[ChatCompletionResponseChoice, ChatCompletionResponseStreamChoice]
    ]
    created: Optional[int] = Field(default_factory=lambda: int(time.time()))

async def verify_token(request: Request):
    auth_header = request.headers.get('Authorization')
    if auth_header:
        token_type, _, token = auth_header.partition(' ')
        if (
            token_type.lower() == "bearer"
            and token == "sk-aaabbbcccdddeeefffggghhhiiijjjkkk"
        ):  # configure your token here
            return True
    raise HTTPException(
        status_code=HTTP_401_UNAUTHORIZED,
        detail="Invalid authorization credentials",
    )

class EmbeddingRequest(BaseModel):
    input: List[str]
    model: str

class EmbeddingResponse(BaseModel):
    data: list
    model: str
    object: str
    usage: dict

def num_tokens_from_string(string: str) -> int:
    """Returns the number of tokens in a text string."""
    encoding = tiktoken.get_encoding('cl100k_base')
    num_tokens = len(encoding.encode(string))
    return num_tokens
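
# Note: m3e-base produces 768-dimensional embeddings; expand_features below
# stretches them to the 1536 dimensions of an OpenAI-style embedding response.
# This is a shape hack (polynomial feature expansion), not a learned projection.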

def expand_features(embedding, target_length):
    poly = PolynomialFeatures(degree=2)
    expanded_embedding = poly.fit_transform(embedding.reshape(1, -1))
    expanded_embedding = expanded_embedding.flatten()
    if len(expanded_embedding) > target_length:
        # If the expanded features exceed the target length, truncate to reduce the dimensionality
        expanded_embedding = expanded_embedding[:target_length]
    elif len(expanded_embedding) < target_length:
        # If the expanded features fall short of the target length, pad to increase the dimensionality
        expanded_embedding = np.pad(
            expanded_embedding, (0, target_length - len(expanded_embedding))
        )
    return expanded_embedding

@app.post("/v1/chat/completions", response_model=ChatCompletionResponse)
async def create_chat_completion(
    request: ChatCompletionRequest, token: bool = Depends(verify_token)
):
    global model, tokenizer

    if request.messages[-1].role != "user":
        raise HTTPException(status_code=400, detail="Invalid request")
    query = request.messages[-1].content

    prev_messages = request.messages[:-1]
    if len(prev_messages) > 0 and prev_messages[0].role == "system":
        query = prev_messages.pop(0).content + query

    history = []
    if len(prev_messages) % 2 == 0:
        for i in range(0, len(prev_messages), 2):
            if (
                prev_messages[i].role == "user"
                and prev_messages[i + 1].role == "assistant"
            ):
                history.append([prev_messages[i].content, prev_messages[i + 1].content])

    if request.stream:
        generate = predict(query, history, request.model)
        return EventSourceResponse(generate, media_type="text/event-stream")

    response, _ = model.chat(tokenizer, query, history=history)
    choice_data = ChatCompletionResponseChoice(
        index=0,
        message=ChatMessage(role="assistant", content=response),
        finish_reason="stop",
    )

    return ChatCompletionResponse(
        model=request.model, choices=[choice_data], object="chat.completion"
    )

async def predict(query: str, history: List[List[str]], model_id: str):
    global model, tokenizer

    choice_data = ChatCompletionResponseStreamChoice(
        index=0, delta=DeltaMessage(role="assistant"), finish_reason=None
    )
    chunk = ChatCompletionResponse(
        model=model_id, choices=[choice_data], object="chat.completion.chunk"
    )
    # yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))
    yield json.dumps(chunk.dict(exclude_unset=True), ensure_ascii=False)

    current_length = 0

    for new_response, _ in model.stream_chat(tokenizer, query, history):
        if len(new_response) == current_length:
            continue

        new_text = new_response[current_length:]
        current_length = len(new_response)

        choice_data = ChatCompletionResponseStreamChoice(
            index=0, delta=DeltaMessage(content=new_text), finish_reason=None
        )
        chunk = ChatCompletionResponse(
            model=model_id, choices=[choice_data], object="chat.completion.chunk"
        )
        # yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))
        yield json.dumps(chunk.dict(exclude_unset=True), ensure_ascii=False)

    choice_data = ChatCompletionResponseStreamChoice(
        index=0, delta=DeltaMessage(), finish_reason="stop"
    )
    chunk = ChatCompletionResponse(
        model=model_id, choices=[choice_data], object="chat.completion.chunk"
    )
    # yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))
    yield json.dumps(chunk.dict(exclude_unset=True), ensure_ascii=False)
    yield '[DONE]'

@app.post("/v1/embeddings", response_model=EmbeddingResponse)
async def get_embeddings(
    request: EmbeddingRequest, token: bool = Depends(verify_token)
):
    # Compute the embedding vectors and token counts
    embeddings = [embeddings_model.encode(text) for text in request.input]

    # If an embedding has fewer than 1536 dimensions, expand it to 1536 (see expand_features)
    embeddings = [
        expand_features(embedding, 1536) if len(embedding) < 1536 else embedding
        for embedding in embeddings
    ]

    # Normalize each embedding to unit length (L2 normalization)
    embeddings = [embedding / np.linalg.norm(embedding) for embedding in embeddings]

    # Convert the numpy arrays to plain lists
    embeddings = [embedding.tolist() for embedding in embeddings]
    prompt_tokens = sum(len(text.split()) for text in request.input)
    total_tokens = sum(num_tokens_from_string(text) for text in request.input)

    response = {
        "data": [
            {"embedding": embedding, "index": index, "object": "embedding"}
            for index, embedding in enumerate(embeddings)
        ],
        "model": request.model,
        "object": "list",
        "usage": {
            "prompt_tokens": prompt_tokens,
            "total_tokens": total_tokens,
        },
    }

    return response

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_name", default="16", type=str, help="Model name")
    args = parser.parse_args()

    model_dict = {
        # "4": "THUDM/chatglm2-6b-int4",
        # "8": "THUDM/chatglm2-6b-int8",
        "16": "/workspace/ChatGLM2-6B-main/model/chatglm2-6b",
    }

    model_name = model_dict.get(args.model_name, "/workspace/ChatGLM2-6B-main/model/chatglm2-6b")

    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda()
    embeddings_model = SentenceTransformer('/workspace/ChatGLM2-6B-main/model/m3e-base', device='cuda')

    uvicorn.run(app, host='0.0.0.0', port=6006, workers=1)

However, the backend then shows this warning: (screenshot)
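To confirm that the model service itself responds before putting oneapi and fastgpt in front of it, the endpoint can be tested directly. A minimal sketch (it assumes port 6006 is reachable from the host and reuses the token hard-coded in openai_api.py above):

import requests

resp = requests.post(
    "http://127.0.0.1:6006/v1/chat/completions",
    headers={"Authorization": "Bearer sk-aaabbbcccdddeeefffggghhhiiijjjkkk"},
    json={
        "model": "chatglm2",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": False,
    },
    timeout=60,
)
print(resp.status_code)
print(resp.json())

A 200 response here means the model container is fine and any remaining problem lies in how oneapi/fastgpt reach it.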

2. Configure oneapi in Docker, with the container exposing port 3989. I created a channel (screenshot) and a token (screenshot), but testing the channel produces an error: (screenshot)
3. Configure fastgpt in Docker, exposing port 3000, following the tutorial at https://doc.fastgpt.in/docs/development/docker/. The config.json file is as follows:

{
  "systemEnv": {
    "openapiPrefix": "fastgpt",
    "vectorMaxProcess": 15,
    "qaMaxProcess": 15,
    "pgHNSWEfSearch": 100
  },
  "chatModels": [
    {
      "model": "gpt-3.5-turbo",
      "name": "GPT35",
      "inputPrice": 0,
      "outputPrice": 0,
      "maxContext": 4000,
      "maxResponse": 4000,
      "quoteMaxToken": 2000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": false,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "maxContext": 16000,
      "maxResponse": 16000,
      "inputPrice": 0,
      "outputPrice": 0,
      "quoteMaxToken": 8000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": false,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "gpt-4",
      "name": "GPT4-8k",
      "maxContext": 8000,
      "maxResponse": 8000,
      "inputPrice": 0,
      "outputPrice": 0,
      "quoteMaxToken": 4000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": false,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "gpt-4-vision-preview",
      "name": "GPT4-Vision",
      "maxContext": 128000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0,
      "quoteMaxToken": 100000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": true,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "chatglm2",
      "name": "chatglm2",
      "maxContext": 4000,
      "maxResponse": 4000,
      "quoteMaxToken": 2000,
      "maxTemperature": 1,
      "vision": false,
      "defaultSystemChatPrompt": ""
    }

  ],
  "qaModels": [
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "maxContext": 16000,
      "maxResponse": 16000,
      "inputPrice": 0,
      "outputPrice": 0
    }
  ],
  "cqModels": [
    {
      "model": "gpt-3.5-turbo",
      "name": "GPT35",
      "maxContext": 4000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0,
      "toolChoice": true,
      "functionPrompt": ""
    },
    {
      "model": "gpt-4",
      "name": "GPT4-8k",
      "maxContext": 8000,
      "maxResponse": 8000,
      "inputPrice": 0,
      "outputPrice": 0,
      "toolChoice": true,
      "functionPrompt": ""
    }
  ],
  "extractModels": [
    {
      "model": "gpt-3.5-turbo-1106",
      "name": "GPT35-1106",
      "maxContext": 16000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0,
      "toolChoice": true,
      "functionPrompt": ""
    }
  ],
  "qgModels": [
    {
      "model": "gpt-3.5-turbo-1106",
      "name": "GPT35-1106",
      "maxContext": 1600,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0
    }
  ],
  "vectorModels": [
    {
      "model": "text-embedding-ada-002",
      "name": "Embedding-2",
      "inputPrice": 0,
      "outputPrice": 0,
      "defaultToken": 700,
      "maxToken": 3000,
      "weight": 100
    }
  ],
  "reRankModels": [],
  "audioSpeechModels": [
    {
      "model": "tts-1",
      "name": "OpenAI TTS1",
      "inputPrice": 0,
      "outputPrice": 0,
      "voices": [
        { "label": "Alloy", "value": "alloy", "bufferId": "openai-Alloy" },
        { "label": "Echo", "value": "echo", "bufferId": "openai-Echo" },
        { "label": "Fable", "value": "fable", "bufferId": "openai-Fable" },
        { "label": "Onyx", "value": "onyx", "bufferId": "openai-Onyx" },
        { "label": "Nova", "value": "nova", "bufferId": "openai-Nova" },
        { "label": "Shimmer", "value": "shimmer", "bufferId": "openai-Shimmer" }
      ]
    }
  ],
  "whisperModel": {
    "model": "whisper-1",
    "name": "Whisper1",
    "inputPrice": 0,
    "outputPrice": 0
  }
}

docker-compose.yml 文件如下:

# Non-host version; does not use the local machine's proxy
# (If you don't know Docker, you only need to care about OPENAI_BASE_URL and CHAT_API_KEY!)
version: '3.3'
services:
  pg:
    image: ankane/pgvector:v0.5.0 # git
    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.5.0 # Alibaba Cloud mirror
    container_name: pg
    restart: always
    ports: # exposing this in production is not recommended
      - 5432:5432
    networks:
      - fastgpt
    environment:
      # These settings only take effect on the first run. Changing them and restarting the container has no effect; delete the persisted data and restart for changes to apply.
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=postgres
    volumes:
      - ./pg/data:/var/lib/postgresql/data
  mongo:
    image: mongo:5.0.18
    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # Alibaba Cloud mirror
    container_name: mongo
    restart: always
    ports: # exposing this in production is not recommended
      - 27017:27017
    networks:
      - fastgpt
    environment:
      # These settings only take effect on the first run. Changing them and restarting the container has no effect; delete the persisted data and restart for changes to apply.
      - MONGO_INITDB_ROOT_USERNAME=username
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - ./mongo/data:/data/db
  fastgpt:
    container_name: fastgpt
    image: ghcr.io/labring/fastgpt:latest # git
    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:latest # Alibaba Cloud mirror
    ports:
      - 3000:3000
    networks:
      - fastgpt
    depends_on:
      - mongo
      - pg
    restart: always
    environment:
      # root password; the username is: root
      - DEFAULT_ROOT_PSW=1234
      # Relay base URL; if you use the official OpenAI endpoint, no need to change it. Be sure to append /v1
      - OPENAI_BASE_URL=https://localhost:3989/v1
      - CHAT_API_KEY=sk-0KI6mUomMPzLKPpZ78C6Ef50511748Ec8b4361306911D330
      - DB_MAX_LINK=5 # max database connections
      - TOKEN_KEY=any
      - ROOT_KEY=root_key
      - FILE_TOKEN_KEY=filetoken
      # Mongo settings, no need to change. If the connection fails, you may need to remove ?authSource=admin
      - MONGODB_URI=mongodb://username:password@mongo:27017/fastgpt?authSource=admin
      # PG settings, no need to change
      - PG_URL=postgresql://username:password@pg:5432/postgres
    volumes:
      - ./config.json:/app/data/config.json
networks:
  fastgpt:

(screenshot)

Expected result

Related screenshots

c121914yu commented 8 months ago

You need to brush up on Docker container networking: localhost points to the container itself, so oneapi can never be reached at that address. As for the oneapi channel test, it's already clear as noted: only GPT models can be tested from the UI; other channels you need to test yourself with curl.
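For example, a working layout can look like the sketch below (assumptions: the oneapi container is named oneapi, it listens on port 3000 inside the container, and it joins the same fastgpt network as the fastgpt service; adjust to your actual setup):

# Sketch: put oneapi on the shared network and point fastgpt at it by
# container name. Note http (not https) and the container-internal port.
services:
  oneapi:
    image: justsong/one-api:latest # assumed image; use whatever you deploy
    networks:
      - fastgpt
    ports:
      - 3989:3000 # host 3989 -> container 3000
  fastgpt:
    networks:
      - fastgpt
    environment:
      - OPENAI_BASE_URL=http://oneapi:3000/v1 # container name + internal port, with /v1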

c121914yu commented 3 months ago

It still reports an error even when placed on the Docker internal network

That just means you haven't set it up correctly ~
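The test is whether the base URL is reachable from inside the fastgpt container, not from the host. The call fastgpt makes boils down to something like the sketch below (the oneapi hostname and port 3000 are assumptions for a setup where both containers share one Docker network):

import requests

BASE_URL = "http://oneapi:3000/v1"  # must resolve from inside fastgpt's network
API_KEY = "sk-..."                  # placeholder: the oneapi token set as CHAT_API_KEY

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "chatglm2", "messages": [{"role": "user", "content": "hi"}]},
    timeout=60,
)
print(resp.status_code, resp.text[:200])

If a request like this succeeds from a container on the fastgpt network while OPENAI_BASE_URL still points at localhost, that mismatch is the Connection error.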

swl632 commented 1 week ago

You need to brush up on Docker container networking: localhost points to the container itself, so oneapi can never be reached at that address. As for the oneapi channel test, it's already clear as noted: only GPT models can be tested from the UI; other channels you need to test yourself with curl.

Bro, how do I solve this? I think I'm running into this problem now. Could you share a solution? Thanks!!!!

Right now my oneapi calls work fine, but when fastgpt inside the container calls the model it reports "Connection error", which should be the container-networking issue you described.