labring / FastGPT

FastGPT is a knowledge-based platform built on LLMs that offers a comprehensive suite of out-of-the-box capabilities such as data processing, RAG retrieval, and visual AI workflow orchestration, letting you easily develop and deploy complex question-answering systems without extensive setup or configuration.
https://tryfastgpt.ai

Question guide (猜你想问) endpoint /api/core/ai/agent/createQuestionGuide returns a 500 error #2653

Closed: Jie2GG closed this issue 1 month ago

Jie2GG commented 1 month ago

Routine checks

Your version

Problem description and log screenshots: after enabling the question guide feature in a simple app, it does not take effect. HTTP response:

[screenshot: HTTP response]

Response body:

{
    "code": 500,
    "statusText": "",
    "message": "bad response status code 400 (request id: 2024090909143933861328236420945)",
    "data": null
}

FastGPT logs:

[Info] 2024-09-09 09:14:22 finish completions {"source":"fastgpt","teamId":"66d90b95f024b4727ad0e712","totalPoints":0}
[Warn] 2024-09-09 09:14:22 Request finish /api/core/chat/chatTest, time: 16460ms 
Warning: data for page "/app/detail" (path "/app/detail?appId=66d9167ff024b4727ad1d65e&currentTab=publish") is 167 kB which exceeds the threshold of 128 kB, this amount of data can reduce performance.
See more info here: https://nextjs.org/docs/messages/large-page-data
[Error] 2024-09-09 09:14:39 Api response error: undefined, bad response status code 400 (request id: 2024090909143933861328236420945) 
{
  message: '400 bad response status code 400 (request id: 2024090909143933861328236420945)',
  stack: 'Error: 400 bad response status code 400 (request id: 2024090909143933861328236420945)\n' +
    '    at tQ.generate (/app/projects/app/.next/server/chunks/61638.js:19:87222)\n' +
    '    at rd.makeStatusError (/app/projects/app/.next/server/chunks/61638.js:19:78991)\n' +
    '    at rd.makeRequest (/app/projects/app/.next/server/chunks/61638.js:19:79914)\n' +
    '    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n' +
    '    at async u (/app/projects/app/.next/server/pages/api/core/ai/agent/createQuestionGuide.js:1:2652)\n' +
    '    at async p (/app/projects/app/.next/server/pages/api/core/ai/agent/createQuestionGuide.js:1:5652)\n' +
    '    at async K (/app/node_modules/.pnpm/next@14.2.5_@babel+core@7.24.9_react-dom@18.3.1_react@18.3.1__react@18.3.1_sass@1.77.8/node_modules/next/dist/compiled/next-server/pages-api.runtime.prod.js:20:16853)\n' +
    '    at async U.render (/app/node_modules/.pnpm/next@14.2.5_@babel+core@7.24.9_react-dom@18.3.1_react@18.3.1__react@18.3.1_sass@1.77.8/node_modules/next/dist/compiled/next-server/pages-api.runtime.prod.js:20:17492)\n' +
    '    at async NextNodeServer.runApi (/app/node_modules/.pnpm/next@14.2.5_@babel+core@7.24.9_react-dom@18.3.1_react@18.3.1__react@18.3.1_sass@1.77.8/node_modules/next/dist/server/next-server.js:600:9)\n' +
    '    at async NextNodeServer.handleCatchallRenderRequest (/app/node_modules/.pnpm/next@14.2.5_@babel+core@7.24.9_react-dom@18.3.1_react@18.3.1__react@18.3.1_sass@1.77.8/node_modules/next/dist/server/next-server.js:269:37)'
}

Steps to reproduce

  1. Create a simple app
  2. Select a model (a locally deployed qwen2-72b-instruct-awq served via Xinference)
  3. Ask the model a question

Expected result: the question guide feature works normally

Related screenshots. Simple app configuration:

[screenshot: simple app configuration]

OneAPI configuration:

[screenshot: OneAPI configuration]

Xinference configuration:

[screenshot: Xinference configuration]

c121914yu commented 1 month ago

The model call is returning an error; check the model yourself.

Jie2GG commented 1 month ago

The model call is returning an error; check the model yourself.

The model is confirmed to be fine: Xinference is running with vLLM as the backend, and in testing all other features (tool calls, AI chat, keyword extraction, etc.) work normally; only the question guide endpoint errors.

c121914yu commented 1 month ago

You can simulate the request straight from the source code; it's just a short prompt.
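
To make that concrete, below is a minimal sketch (not FastGPT's actual implementation) of simulating such a question-guide call against an OpenAI-compatible endpoint with the openai Node SDK. The base URL, API key variable, model name, and prompt text are all assumptions standing in for the values from your deployment and from FastGPT's source.

// questionGuideTest.ts: hypothetical reproduction script, not FastGPT code.
// Assumes the OneAPI gateway is reachable at http://localhost:3000/v1 and a
// valid key is exported as ONEAPI_KEY; the prompt below is only a stand-in
// for FastGPT's real question-guide prompt.
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:3000/v1', // OneAPI gateway (assumed address)
  apiKey: process.env.ONEAPI_KEY ?? ''
});

async function main() {
  const completion = await client.chat.completions.create({
    model: 'qwen2-72b-instruct-awq', // same model name as configured in OneAPI
    temperature: 0.1,
    messages: [
      {
        role: 'user',
        content:
          'Based on the conversation so far, suggest 3 follow-up questions the user might ask next. Return them as a JSON array of strings.'
      }
    ]
  });
  console.log(completion.choices[0].message.content);
}

main().catch((err) => {
  // A 400 here, matching the FastGPT log above, would point at OneAPI/vLLM
  // rather than at FastGPT itself.
  console.error(err.status, err.message);
});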

Jie2GG commented 1 month ago

You can simulate the request straight from the source code; it's just a short prompt.

Do you mean sending the request directly to the vLLM backend?

c121914yu commented 1 month ago

You can simulate the request straight from the source code; it's just a short prompt.

Do you mean sending the request directly to the vLLM backend?

Test OneAPI first, then vLLM.
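
As a rough illustration of that order, the same request body can be reused with only the base URL swapped; the addresses below are assumptions and need to match your deployment.

// Hypothetical endpoints; adjust to your setup.
const ONEAPI_BASE = 'http://localhost:3000/v1';     // step 1: through the OneAPI gateway
const XINFERENCE_BASE = 'http://localhost:9997/v1'; // step 2: straight to Xinference (vLLM backend)

// If the call through ONEAPI_BASE fails with 400 but the same request
// succeeds against XINFERENCE_BASE, the problem is in the OneAPI channel
// configuration rather than in the model itself.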

Jie2GG commented 1 month ago

@c121914yu I haven't learned TS. This is all I could find in the source about the question guide so far; is this the prompt?

[screenshot: question guide prompt found in the source]

Jie2GG commented 1 month ago

@c121914yu I found the prompt and tried constructing a request; the OneAPI side returns a normal response.

[screenshot: constructed request and OneAPI response]