chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain on LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

[BUG] The same model runs fine under Docker but fails to start when orchestrated with K8s #4162

Closed yongxingMa closed 2 months ago

yongxingMa commented 2 months ago

Problem Description

After building the image from the Dockerfile, the container starts successfully with the docker command below and the service is reachable:

```
docker run -d --gpus all -v /home/chatglm3-6b:/Langchain-Chatchat/chatglm3-6b -p 8501:8501 registry.cn-hangzhou.aliyuncs.com/smart33690/chat-chatglm36b:0.6
```

The K8s Deployment is as follows:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubesphere.io/instance: chatglm-bbmn3p
  name: chatglm3-6b-deployment
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: chatglm3-6b-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubesphere.io/creator: igwadmin2
      creationTimestamp: null
      labels:
        app: chatglm3-6b-app
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/igw/chat-model:1.0
        imagePullPolicy: IfNotPresent
        name: chatglm3-6b-container
        ports:
        - containerPort: 8501
          hostPort: 8501
          protocol: TCP
        resources:
          limits:
            nvidia.com/gpu: "1"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /Langchain-Chatchat/chatglm3-6b
          name: model-volume
        - mountPath: /Langchain-Chatchat/configs/model_config.py
          name: model-config
          subPath: model_config.py
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /home/chatglm3-6b
          type: Directory
        name: model-volume
      - configMap:
          defaultMode: 420
          name: model-config-map
        name: model-config
```
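Before chasing runtime errors, it is worth confirming that the hostPath and ConfigMap mounts landed where the application expects them. A minimal sanity check, assuming kubectl access to the cluster and the deployment name above:

```
# Confirm the model weights are visible inside the pod
kubectl exec deploy/chatglm3-6b-deployment -- ls /Langchain-Chatchat/chatglm3-6b

# Confirm the subPath-mounted ConfigMap overrides the image's own config
kubectl exec deploy/chatglm3-6b-deployment -- head -n 5 /Langchain-Chatchat/configs/model_config.py
```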

The ConfigMap file:

```
apiVersion: v1
data:
  model_config.py: |
    import os

    # You can specify an absolute path under which all Embedding and LLM models are stored.
    # Each model can be a standalone directory, or a second-level subdirectory of one.
    # If a model's directory name matches a key or value in MODEL_PATH, the program detects and loads it automatically, with no need to modify the paths in MODEL_PATH.
    # MODEL_ROOT_PATH = ""
    MODEL_ROOT_PATH = "/Langchain-Chatchat"

    # Name of the Embedding model to use
    EMBEDDING_MODEL = "bge-large-zh-v1.5"

    # Device the Embedding model runs on. "auto" autodetects (with a warning); it can also be set manually to one of "cuda", "mps", "cpu", or "xpu".
    EMBEDDING_DEVICE = "auto"

    # Reranker model to use
    RERANKER_MODEL = "bge-reranker-large"
    # Whether to enable the reranker model
    USE_RERANKER = False
    RERANKER_MAX_LENGTH = 1024

    # Configure this when you need to add custom keywords to EMBEDDING_MODEL
    EMBEDDING_KEYWORD_FILE = "keywords.txt"
    EMBEDDING_MODEL_OUTPUT_PATH = "output"

    # Names of the LLMs to run; the list may include both local and online models. All local models in the list are loaded at project startup.
    # The first model in the list serves as the default for the API and the WEBUI.
    # Here we use two currently mainstream offline models, with chatglm3-6b as the default loaded model.
    # If you are short on VRAM, use Qwen-1_8B-Chat, which needs only 3.8 GB of VRAM in FP16.

    LLM_MODELS = ["chatglm3-6b"]
    Agent_MODEL = None

    # Device the LLM runs on. "auto" autodetects (with a warning); it can also be set manually to one of "cuda", "mps", "cpu", or "xpu".
    LLM_DEVICE = "auto"

    HISTORY_LEN = 3

    MAX_TOKENS = 2048

    TEMPERATURE = 0.7

    ONLINE_LLM_MODEL = {
        "openai-api": {
            "model_name": "gpt-4",
            "api_base_url": "https://api.openai.com/v1",
            "api_key": "",
            "openai_proxy": "",
        },

        # Zhipu AI API; for registration and API keys see http://open.bigmodel.cn
        "zhipu-api": {
            "api_key": "",
            "version": "glm-4",
            "provider": "ChatGLMWorker",
        },

        # For registration and API keys see https://api.minimax.chat/
        "minimax-api": {
            "group_id": "",
            "api_key": "",
            "is_pro": False,
            "provider": "MiniMaxWorker",
        },

        # For registration and API keys see https://xinghuo.xfyun.cn/
        "xinghuo-api": {
            "APPID": "",
            "APISecret": "",
            "api_key": "",
            "version": "v3.5", # 你使用的讯飞星火大模型版本,可选包括 "v3.5","v3.0", "v2.0", "v1.5"
            "provider": "XingHuoWorker",
        },

        # Baidu Qianfan API; for how to apply see https://cloud.baidu.com/doc/WENXINWORKSHOP/s/4lilb2lpf
        "qianfan-api": {
            "version": "ERNIE-Bot",  # 注意大小写。当前支持 "ERNIE-Bot" 或 "ERNIE-Bot-turbo", 更多的见官方文档。
            "version_url": "",  # 也可以不填写version,直接填写在千帆申请模型发布的API地址
            "api_key": "",
            "secret_key": "",
            "provider": "QianFanWorker",
        },

        # Volcengine Ark API; see the docs at https://www.volcengine.com/docs/82379
        "fangzhou-api": {
            "version": "", # the endpoint_id of your Volcengine Ark deployment
            "version_url": "",
            "api_key": "",
            "secret_key": "",
            "provider": "FangZhouWorker",
        },

        # Alibaba Cloud Tongyi Qianwen API; see the docs at https://help.aliyun.com/zh/dashscope/developer-reference/api-details
        "qwen-api": {
            "version": "qwen-max",
            "api_key": "",
            "provider": "QwenWorker",
            "embed_model": "text-embedding-v1"  # embedding 模型名称
        },

        # Baichuan API; for how to apply see https://www.baichuan-ai.com/home#api-enter
        "baichuan-api": {
            "version": "Baichuan2-53B",
            "api_key": "",
            "secret_key": "",
            "provider": "BaiChuanWorker",
        },

        # Azure API
        "azure-api": {
            "deployment_name": "",  # 部署容器的名字
            "resource_name": "",  # https://{resource_name}.openai.azure.com/openai/ 填写resource_name的部分,其他部分不要填写
            "api_version": "",  # API的版本,不是模型版本
            "api_key": "",
            "provider": "AzureWorker",
        },

        # Kunlun Tiangong API https://model-platform.tiangong.cn/
        "tiangong-api": {
            "version": "SkyChat-MegaVerse",
            "api_key": "",
            "secret_key": "",
            "provider": "TianGongWorker",
        },

        # Gemini API https://makersuite.google.com/app/apikey
        "gemini-api": {
            "api_key": "",
            "provider": "GeminiWorker",
        }

    }

    # Modify the values in the dict below to specify where local embedding models are stored. Three ways to set this are supported:
    # 1. Change the value to the absolute path of the model
    # 2. Leave the value unchanged (taking text2vec as an example):
    #       2.1 If any of the following subdirectories exists under {MODEL_ROOT_PATH}:
    #           - text2vec
    #           - GanymedeNil/text2vec-large-chinese
    #           - text2vec-large-chinese
    #       2.2 If none of the above local paths exists, the huggingface model is used

    MODEL_PATH = {
        "embed_model": {
            "ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
            "ernie-base": "nghuyong/ernie-3.0-base-zh",
            "text2vec-base": "shibing624/text2vec-base-chinese",
            "text2vec": "GanymedeNil/text2vec-large-chinese",
            "text2vec-paraphrase": "shibing624/text2vec-base-chinese-paraphrase",
            "text2vec-sentence": "shibing624/text2vec-base-chinese-sentence",
            "text2vec-multilingual": "shibing624/text2vec-base-multilingual",
            "text2vec-bge-large-chinese": "shibing624/text2vec-bge-large-chinese",
            "m3e-small": "moka-ai/m3e-small",
            "m3e-base": "moka-ai/m3e-base",
            "m3e-large": "moka-ai/m3e-large",

            "bge-small-zh": "BAAI/bge-small-zh",
            "bge-base-zh": "BAAI/bge-base-zh",
            "bge-large-zh": "BAAI/bge-large-zh",
            "bge-large-zh-noinstruct": "BAAI/bge-large-zh-noinstruct",
            "bge-base-zh-v1.5": "BAAI/bge-base-zh-v1.5",
            "bge-large-zh-v1.5": "BAAI/bge-large-zh-v1.5",

            "bge-m3": "BAAI/bge-m3",

            "piccolo-base-zh": "sensenova/piccolo-base-zh",
            "piccolo-large-zh": "sensenova/piccolo-large-zh",
            "nlp_gte_sentence-embedding_chinese-large": "damo/nlp_gte_sentence-embedding_chinese-large",
            "text-embedding-ada-002": "your OPENAI_API_KEY",
        },

        "llm_model": {
            "chatglm2-6b": "THUDM/chatglm2-6b",
            "chatglm2-6b-32k": "THUDM/chatglm2-6b-32k",
            "chatglm3-6b": "THUDM/chatglm3-6b",
            "chatglm3-6b-32k": "THUDM/chatglm3-6b-32k",

            "Orion-14B-Chat": "OrionStarAI/Orion-14B-Chat",
            "Orion-14B-Chat-Plugin": "OrionStarAI/Orion-14B-Chat-Plugin",
            "Orion-14B-LongChat": "OrionStarAI/Orion-14B-LongChat",

            "Llama-2-7b-chat-hf": "meta-llama/Llama-2-7b-chat-hf",
            "Llama-2-13b-chat-hf": "meta-llama/Llama-2-13b-chat-hf",
            "Llama-2-70b-chat-hf": "meta-llama/Llama-2-70b-chat-hf",

            "Qwen-1_8B-Chat": "Qwen/Qwen-1_8B-Chat",
            "Qwen-7B-Chat": "Qwen/Qwen-7B-Chat",
            "Qwen-14B-Chat": "Qwen/Qwen-14B-Chat",
            "Qwen-72B-Chat": "Qwen/Qwen-72B-Chat",

            # Qwen1.5 models may have issues with VLLM
            "Qwen1.5-0.5B-Chat": "Qwen/Qwen1.5-0.5B-Chat",
            "Qwen1.5-1.8B-Chat": "Qwen/Qwen1.5-1.8B-Chat",
            "Qwen1.5-4B-Chat": "Qwen/Qwen1.5-4B-Chat",
            "Qwen1.5-7B-Chat": "Qwen/Qwen1.5-7B-Chat",
            "Qwen1.5-14B-Chat": "Qwen/Qwen1.5-14B-Chat",
            "Qwen1.5-72B-Chat": "Qwen/Qwen1.5-72B-Chat",

            "baichuan-7b-chat": "baichuan-inc/Baichuan-7B-Chat",
            "baichuan-13b-chat": "baichuan-inc/Baichuan-13B-Chat",
            "baichuan2-7b-chat": "baichuan-inc/Baichuan2-7B-Chat",
            "baichuan2-13b-chat": "baichuan-inc/Baichuan2-13B-Chat",

            "internlm-7b": "internlm/internlm-7b",
            "internlm-chat-7b": "internlm/internlm-chat-7b",
            "internlm2-chat-7b": "internlm/internlm2-chat-7b",
            "internlm2-chat-20b": "internlm/internlm2-chat-20b",

            "BlueLM-7B-Chat": "vivo-ai/BlueLM-7B-Chat",
            "BlueLM-7B-Chat-32k": "vivo-ai/BlueLM-7B-Chat-32k",

            "Yi-34B-Chat": "https://huggingface.co/01-ai/Yi-34B-Chat",

            "agentlm-7b": "THUDM/agentlm-7b",
            "agentlm-13b": "THUDM/agentlm-13b",
            "agentlm-70b": "THUDM/agentlm-70b",

            "falcon-7b": "tiiuae/falcon-7b",
            "falcon-40b": "tiiuae/falcon-40b",
            "falcon-rw-7b": "tiiuae/falcon-rw-7b",

            "aquila-7b": "BAAI/Aquila-7B",
            "aquilachat-7b": "BAAI/AquilaChat-7B",
            "open_llama_13b": "openlm-research/open_llama_13b",
            "vicuna-13b-v1.5": "lmsys/vicuna-13b-v1.5",
            "koala": "young-geng/koala",
            "mpt-7b": "mosaicml/mpt-7b",
            "mpt-7b-storywriter": "mosaicml/mpt-7b-storywriter",
            "mpt-30b": "mosaicml/mpt-30b",
            "opt-66b": "facebook/opt-66b",
            "opt-iml-max-30b": "facebook/opt-iml-max-30b",
            "gpt2": "gpt2",
            "gpt2-xl": "gpt2-xl",
            "gpt-j-6b": "EleutherAI/gpt-j-6b",
            "gpt4all-j": "nomic-ai/gpt4all-j",
            "gpt-neox-20b": "EleutherAI/gpt-neox-20b",
            "pythia-12b": "EleutherAI/pythia-12b",
            "oasst-sft-4-pythia-12b-epoch-3.5": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
            "dolly-v2-12b": "databricks/dolly-v2-12b",
            "stablelm-tuned-alpha-7b": "stabilityai/stablelm-tuned-alpha-7b",
        },

        "reranker": {
            "bge-reranker-large": "BAAI/bge-reranker-large",
            "bge-reranker-base": "BAAI/bge-reranker-base",
        }
    }

    # Normally the following does not need to be changed

    # Storage path for nltk models
    NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data")

    # Using VLLM may degrade model inference quality and prevent Agent tasks from completing
    VLLM_MODEL_DICT = {
        "chatglm2-6b": "THUDM/chatglm2-6b",
        "chatglm2-6b-32k": "THUDM/chatglm2-6b-32k",
        "chatglm3-6b": "THUDM/chatglm3-6b",
        "chatglm3-6b-32k": "THUDM/chatglm3-6b-32k",

        "Llama-2-7b-chat-hf": "meta-llama/Llama-2-7b-chat-hf",
        "Llama-2-13b-chat-hf": "meta-llama/Llama-2-13b-chat-hf",
        "Llama-2-70b-chat-hf": "meta-llama/Llama-2-70b-chat-hf",

        "Qwen-1_8B-Chat": "Qwen/Qwen-1_8B-Chat",
        "Qwen-7B-Chat": "Qwen/Qwen-7B-Chat",
        "Qwen-14B-Chat": "Qwen/Qwen-14B-Chat",
        "Qwen-72B-Chat": "Qwen/Qwen-72B-Chat",

        "baichuan-7b-chat": "baichuan-inc/Baichuan-7B-Chat",
        "baichuan-13b-chat": "baichuan-inc/Baichuan-13B-Chat",
        "baichuan2-7b-chat": "baichuan-inc/Baichuan-7B-Chat",
        "baichuan2-13b-chat": "baichuan-inc/Baichuan-13B-Chat",

        "BlueLM-7B-Chat": "vivo-ai/BlueLM-7B-Chat",
        "BlueLM-7B-Chat-32k": "vivo-ai/BlueLM-7B-Chat-32k",

        "internlm-7b": "internlm/internlm-7b",
        "internlm-chat-7b": "internlm/internlm-chat-7b",
        "internlm2-chat-7b": "internlm/Models/internlm2-chat-7b",
        "internlm2-chat-20b": "internlm/Models/internlm2-chat-20b",

        "aquila-7b": "BAAI/Aquila-7B",
        "aquilachat-7b": "BAAI/AquilaChat-7B",

        "falcon-7b": "tiiuae/falcon-7b",
        "falcon-40b": "tiiuae/falcon-40b",
        "falcon-rw-7b": "tiiuae/falcon-rw-7b",
        "gpt2": "gpt2",
        "gpt2-xl": "gpt2-xl",
        "gpt-j-6b": "EleutherAI/gpt-j-6b",
        "gpt4all-j": "nomic-ai/gpt4all-j",
        "gpt-neox-20b": "EleutherAI/gpt-neox-20b",
        "pythia-12b": "EleutherAI/pythia-12b",
        "oasst-sft-4-pythia-12b-epoch-3.5": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
        "dolly-v2-12b": "databricks/dolly-v2-12b",
        "stablelm-tuned-alpha-7b": "stabilityai/stablelm-tuned-alpha-7b",
        "open_llama_13b": "openlm-research/open_llama_13b",
        "vicuna-13b-v1.3": "lmsys/vicuna-13b-v1.3",
        "koala": "young-geng/koala",
        "mpt-7b": "mosaicml/mpt-7b",
        "mpt-7b-storywriter": "mosaicml/mpt-7b-storywriter",
        "mpt-30b": "mosaicml/mpt-30b",
        "opt-66b": "facebook/opt-66b",
        "opt-iml-max-30b": "facebook/opt-iml-max-30b",

    }

    SUPPORT_AGENT_MODEL = [
        "openai-api",  # GPT4 模型
        "qwen-api",  # Qwen Max模型
        "zhipu-api",  # 智谱AI GLM4模型
        "Qwen",  # 所有Qwen系列本地模型
        "chatglm3-6b",
        "internlm2-chat-20b",
        "Orion-14B-Chat-Plugin",
    ]
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubesphere.io/instance: chatglm-bbmn3p
  name: model-config-map
```
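Maintaining a full model_config.py inline in YAML is error-prone. One way to keep it in sync, assuming the file is kept locally as model_config.py, is to regenerate the ConfigMap from the file; note that subPath mounts do not pick up ConfigMap updates in a running pod, so a rollout is needed afterwards:

```
# Regenerate the ConfigMap from the local file and apply it in place
kubectl create configmap model-config-map --from-file=model_config.py \
  --dry-run=client -o yaml | kubectl apply -f -

# subPath mounts are not refreshed live; restart the deployment to pick up changes
kubectl rollout restart deployment chatglm3-6b-deployment
```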

After deployment, startup fails with the following error:

```

2024-06-11 02:04:00 | ERROR | stderr | INFO:     Waiting for application startup.
2024-06-11 02:04:00 | ERROR | stderr | INFO:     Application startup complete.
2024-06-11 02:04:00 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-06-11 02:04:01 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker c30eb17d ...
2024-06-11 02:04:01 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting eos_token is not supported, use the default one.
2024-06-11 02:04:01 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting pad_token is not supported, use the default one.
2024-06-11 02:04:01 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting unk_token is not supported, use the default one.
Loading checkpoint shards: 100%|██████████| 7/7 [00:02<00:00,  2.97it/s]
2024-06-11 02:04:04 | ERROR | stderr | 
2024-06-11 02:04:08 | INFO | model_worker | Register to controller
INFO:     Started server process [735]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)
ERROR; return code from pthread_create() is 11
        Error detail: Resource temporarily unavailable
```

Has anyone run into a similar scenario, or solved a similar problem? Discussion is welcome.
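For context on the error itself: pthread_create() returning 11 is EAGAIN ("Resource temporarily unavailable"), which during thread creation usually points to a process/thread count ceiling rather than memory exhaustion, since the pids cgroup counts threads as tasks. A quick check from inside the container, assuming cgroup v1 (cgroup v2 exposes pids.max directly under /sys/fs/cgroup):

```
# Current task count vs. the enforced ceiling for this container
cat /sys/fs/cgroup/pids/pids.current
cat /sys/fs/cgroup/pids/pids.max

# The same failure should be reproducible under plain Docker by imposing an equivalent limit
docker run -d --gpus all --pids-limit 1000 -p 8501:8501 registry.cn-hangzhou.aliyuncs.com/smart33690/chat-chatglm36b:0.6
```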

yongxingMa commented 2 months ago

The problem is solved; recording the fix here. docker stats showed the container's PID count hovering around 900+ and never exceeding 1000, while normally running containers sit at 1200+. Checking the configuration, the cause was a limit in the K8s kubelet configuration: podPidsLimit was set to 1000. After changing podPidsLimit to 10000, the pod runs normally. See the configuration below (vi /etc/kubernetes/kubeadm-config.yaml):

```

kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
- 169.254.25.10
clusterDomain: cluster.local
evictionHard:
  memory.available: 5%
  pid.available: 10%
evictionMaxPodGracePeriod: 120
evictionPressureTransitionPeriod: 30s
evictionSoft:
  memory.available: 10%
evictionSoftGracePeriod:
  memory.available: 2m
featureGates:
  CSIStorageCapacity: true
  ExpandCSIVolumes: true
  RotateKubeletServerCertificate: true
  TTLAfterFinished: true
kubeReserved:
  cpu: 200m
  memory: 250Mi
maxPods: 110
podPidsLimit: 10000
rotateCertificates: true
systemReserved:
  cpu: 200m
  memory: 250Mi
```

Restart the kubelet with kubeadm init phase kubelet-start.
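On nodes where re-running kubeadm phases is undesirable, the same setting can usually be applied by editing the kubelet's on-disk configuration and restarting the service; the path below is the kubeadm default and may vary by distribution:

```
# Default kubelet config location on kubeadm-based nodes
sudo vi /var/lib/kubelet/config.yaml   # set podPidsLimit: 10000
sudo systemctl restart kubelet

# Verify the pod now sees the higher ceiling (cgroup v1 path)
kubectl exec deploy/chatglm3-6b-deployment -- cat /sys/fs/cgroup/pids/pids.max
```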