Open itq5 opened 5 months ago
I used a GPT-4 & 3.5 forwarding API and ran into exactly the same problem.
Has your issue been resolved?
I also used a GPT-4 & 3.5 forwarding API and hit exactly the same problem.
Has it been resolved?
I applied a patch locally, which seems to help somewhat.
```diff
--- a/XAgent/ai_functions/request/obj_generator.py
+++ b/XAgent/ai_functions/request/obj_generator.py
@@ -189,6 +189,8 @@ class OBJGenerator:
         if 'function_call' not in response['choices'][0]['message']:
             logger.typewriter_log("FunctionCallSchemaError: No function call found in the response", Fore.RED)
             raise FunctionCallSchemaError(f"No function call found in the response: {response['choices'][0]['message']} ")
+        if not response['choices'][0]['message']['function_call']:
+            return response
         # verify the schema of the function call if exists
         function_schema = list(filter(lambda x: x['name'] == response['choices'][0]['message']['function_call']['name'], req_kwargs['functions']))
```
It still doesn't work, though; the code logic itself seems to be the problem. Once `function_call = None` is passed down, the request sent to the OpenAI API contains

`{ ... "function_call": null }`

According to the OpenAI documentation, such a call is invalid. The result is most likely equivalent to

`{ ... "function_call": "none" }`

rather than to `"auto"`, and `"auto"` is the expected value.
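One way to avoid sending the invalid payload is to strip a null `function_call` from the request kwargs before the request goes out, so the API falls back to its default (`"auto"`) behavior. A minimal sketch with a hypothetical helper name, not XAgent's actual code:

```python
def sanitize_request_kwargs(kwargs: dict) -> dict:
    """Return a copy of the request kwargs with a null function_call removed,
    so the API applies its default ("auto") instead of rejecting the call."""
    cleaned = dict(kwargs)
    if cleaned.get("function_call") is None:
        # Covers both an explicit None value and an absent key.
        cleaned.pop("function_call", None)
    return cleaned
```

For example, `sanitize_request_kwargs({"model": "gpt-4", "function_call": None})` returns `{"model": "gpt-4"}`, while a legitimate `"auto"` or `{"name": ...}` value passes through untouched.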
I found the root cause of this problem. XAgent exposes two functions in `functions`:

subtask_submit
subtask_handle

and lets the LLM choose one of them; the arguments of `subtask_handle` are then supposed to invoke one entry from the Tools list in the system prompt. The LLM did not follow this expectation of returning the tool invocation through the function call; instead, it embedded the tool call directly in the text output.
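For illustration, here is a simplified sketch of that two-layer design (the field names are my guesses, not XAgent's exact schema): the model picks `subtask_handle` at the function-call layer, and the actual tool invocation is supposed to be nested inside its arguments.

```python
# Hypothetical, simplified version of the second-layer schema: the model is
# expected to return a function_call to subtask_handle whose arguments
# contain a nested tool invocation chosen from the system-prompt Tools list.
SUBTASK_HANDLE = {
    "name": "subtask_handle",
    "parameters": {
        "type": "object",
        "properties": {
            "tool_call": {  # second layer: one tool from the system prompt
                "type": "object",
                "properties": {
                    "tool_name": {"type": "string"},
                    "tool_input": {"type": "object"},
                },
                "required": ["tool_name", "tool_input"],
            },
        },
        "required": ["tool_call"],
    },
}
```

The schema-verification step quoted in the patch above then has to find this nested structure in the response, which fails when the model writes the tool call into plain text instead.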
> console.log(x.output.choices[0].message.content)
Given the information we have gathered and the preliminary structure of the "integration_technology_draft.md" file, the next step is to fill in each section with the summarized information. The information collected about NiceGUI, LangChain, OpenAI, Zhipuai, and Baichuanai provides a solid foundation for writing comprehensive documentation. This documentation will detail the key features, integration potentials, and challenges of integrating these technologies.
We begin by prioritizing the completion of the first section on NiceGUI, following the established outline. This step is logical as it allows for a focused and cohesive compilation of information. After completing the NiceGUI section, we will continue to fill in the rest of the sections systematically. This approach ensures efficiency and thoroughness, adhering to the task's objectives.
The critical information on NiceGUI extracted from the search results highlights its user-friendly interface, efficient connection management, automation in reconnection, and its suitability for tasks requiring continuous interaction. This information directly maps onto the designated subsections of key features, integration potential, and challenges.
Given the amount of text to be generated and the need for precise formatting, it is appropriate to utilize the FileSystemEnv to write to the document directly.
**Action Plan**: Populate the NiceGUI section in the "integration_technology_draft.md" file with the following points:
- Key Features: User-friendly interface, efficient connection management, reconnection automation, suitable for tasks requiring continuous interaction.
- Integration Potential: High compatibility with devices like webcams and GPIO pins in IoT setups, streamlines code management.
- Challenges: Potential challenges include ensuring stability in varied application scenarios and managing real-time interaction complexities.
**Tool Call:**
```json
{
"tool_name": "FileSystemEnv_write_to_file",
"tool_input": {
"filepath": "integration_technology_draft.md",
"content": "\n### NiceGUI\n- **Key Features:** User-friendly interface, efficient connection management, automation in reconnection, and suitability for tasks requiring continuous interaction. Compatible with peripheral devices and streamlines code management across applications.\n- **Integration Potential:** High due to its ability to enhance user experience with a simplified interface and efficient management of connected devices and real-time data streams in IoT and smart applications.\n- **Challenges:** Ensuring application stability across various use cases and managing the complexities of real-time data interaction and user-input processing.",
"line_number": 4,
"overwrite": true
}
}
```
This action aims to incrementally complete the documentation by elaborating on NiceGUI, setting a precedent for detailing the other technologies in subsequent steps.
> console.log(x.output.choices[0].message.function_call)
null
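Given that behavior, a possible workaround (a sketch, not XAgent's code) is a fallback parser that recovers the embedded tool call from the message text when `function_call` is null:

```python
import json
import re

def extract_tool_call_from_content(content: str):
    """Fallback: when function_call is null but the model embedded a JSON
    tool call in a ```json fence in the text, try to pull it out."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", content, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
```

This is brittle (it assumes a well-formed fenced JSON object, which the truncated fence in the output above shows is not guaranteed), but it can rescue responses like the one quoted here.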
I don't know why it was designed this way. Putting the tool invocations directly into the `functions` list (or the `tools` list) and letting the LLM pick one of N, instead of the current two-layer design, might work better.
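A minimal sketch of that flattened alternative (hypothetical helper, not an existing XAgent API): merge the tool schemas into the functions array so the model makes a single one-of-N choice at the function-call layer.

```python
def flatten_tools(base_functions: list, tools: list) -> list:
    """Expose each tool as a top-level function so the model picks exactly
    one callable, instead of nesting tool calls inside subtask_handle."""
    flattened = list(base_functions)
    for tool in tools:
        flattened.append({
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get(
                "parameters", {"type": "object", "properties": {}}),
        })
    return flattened
```

With this layout the schema check becomes a flat name lookup, and a null `function_call` can no longer hide a tool call inside free-form text arguments.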
Regarding the OpenAI API, I found a problem:
given a set of functions / tools, in "auto" mode the API does not guarantee that a function / tool will be chosen.
In other words, OpenAI offers three function/tool-calling modes ("none", "auto", and forcing one specific named function),
but it lacks a fourth mode: forcing the model to call *some* function while leaving the choice of which one to the model.
And XAgent's prompt design seems to assume that the LLM's output will follow this missing mode.
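Until such a mode exists, one workaround is to retry until the response actually contains a function call. A sketch, where `create_fn` is a stand-in for the chat-completions client call (not a real SDK function):

```python
def ensure_function_call(create_fn, request: dict, max_retries: int = 2):
    """Call the model up to max_retries + 1 times and return the first
    response whose message contains a non-empty function_call."""
    last = None
    for _ in range(max_retries + 1):
        last = create_fn(**request)
        message = last["choices"][0]["message"]
        if message.get("function_call"):
            return last
    raise RuntimeError(
        f"no function_call after {max_retries + 1} attempts: {last}")
```

This does not fix the underlying mismatch, but it papers over the "auto mode may choose no function" gap at the cost of extra API calls.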
Issue Description
Using an API deployed via the One-API project, when running a Task I can see One-API producing a response and replying, but XAgent itself errors out.
Steps to Reproduce
Expected Behavior
Environment
Error Screenshots or Logs
XAgent logs:
One-API logs:
assets/config.yml configuration:
Additional Notes
Please help me check how XAgent should be modified to make it compatible.