-
The goal of this segment is to create meaningful benchmark subsets with a minimal set of tasks.
I believe the steps are as follows:
1) construct an experimental subset. If people agree I can con…
-
I set up everything as the documentation said, and when I run `npm run dev` to start my local server, the UI gives me this error when I try to chat with the bot: Cannot read properties of undefined (rea…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
**Problem Description**
When testing the agent chat, I get this error: peer closed connection without sending complete message body (incomplete chunked read)
```
base_url = "http://127.0.0.1:7861/chat"
tools = list(requests…
```
-
Thank you for your excellent work! I would like to ask for your assistance with reproducing the results of Molecule-Text Retrieval for PCDes. However, I seem to be encountering some issues.
I have …
-
When I test the chatGLM2-6b model on its own (no RAG, embeddings, etc., just calling the model with a plain prompt), a response takes 4s at 1,200 tokens and 28s at 3,800 tokens. But with this project, no matter how long the input is (i.e., regardless of how many matched knowledge entries are configured), the model always streams its output within 1s. What optimization makes the model respond so much faster?
I originally assumed context compression was reducing the token count, but…
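A likely explanation (an assumption, not confirmed by the project) is that the "1s" being observed is time-to-first-token under streaming, while the 4s/28s figures measure the full blocking response. A minimal sketch with a simulated token generator (`generate_tokens` is a made-up stand-in for a real streaming LLM call) illustrates the difference:

```python
import time

def generate_tokens(n_tokens, per_token_s=0.01):
    """Simulated model: yields tokens one at a time.
    (Hypothetical stand-in for a streaming LLM; the per-token delay is invented.)"""
    for i in range(n_tokens):
        time.sleep(per_token_s)
        yield f"tok{i}"

def first_token_latency(stream):
    """Time until the first token arrives -- what a streaming UI shows."""
    start = time.perf_counter()
    next(iter(stream))
    return time.perf_counter() - start

def full_response_latency(stream):
    """Time until the last token arrives -- what a blocking call measures."""
    start = time.perf_counter()
    for _ in stream:
        pass
    return time.perf_counter() - start

ttft = first_token_latency(generate_tokens(100))
total = full_response_latency(generate_tokens(100))
print(f"time to first token: {ttft:.2f}s, full response: {total:.2f}s")
```

Under this reading, total generation time is unchanged; only the perceived latency drops, which would match seeing output begin within 1s regardless of prompt length.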
-
Say, I have my own dataset on which I want to benchmark on. It would be great if there is a native support to carry out the same.
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to…
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
### Feature request
Can inference be performed with larger batch dimensions?
Currently, tokenisation is supported up to List[List[str]], and any dimension higher than that needs to be traversed via a …
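In the meantime, a common workaround is to flatten the extra outer dimension, tokenize once, and restore the nesting afterwards. A minimal sketch (`fake_tokenize` is a hypothetical stand-in for a tokenizer that accepts at most List[List[str]]):

```python
from typing import List

def fake_tokenize(batch: List[List[str]]) -> List[List[int]]:
    """Stand-in tokenizer: maps each string to its length.
    (Hypothetical; a real tokenizer would return token-id lists.)"""
    return [[len(s) for s in group] for group in batch]

def tokenize_3d(nested: List[List[List[str]]]) -> List[List[List[int]]]:
    """Flatten the outer dimension, call the tokenizer once on the 2-D
    batch, then restore the original nesting from the block lengths."""
    flat = [group for block in nested for group in block]
    flat_out = fake_tokenize(flat)
    out, i = [], 0
    for block in nested:
        out.append(flat_out[i:i + len(block)])
        i += len(block)
    return out

data = [[["a", "bb"], ["ccc"]], [["dddd"]]]
print(tokenize_3d(data))  # [[[1, 2], [3]], [[4]]]
```

This keeps a single tokenizer call on the hot path instead of a Python-level loop over sub-batches, which is usually where the traversal cost comes from.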