-
Use GPT-4 to generate code, and a faster model with a larger context window, like `claude-instant-v1-100k`, for checking.
Users should be able to define their own semantic-checking model, and the metaprogramming API should optio…
-
Hello everyone.
How can I use the OpenAI GPT-4 version? Is it not a free version?
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Linux…
-
Hi, when I use Medusa decoding on trtllm-090 with profiling, the following error occurred. Could you please take a look? Thanks!
If I do not use `--run_profiling`, the inference process is nor…
-
1. Collect money (~500 USD should be enough I believe?)
2. Open account with OpenAI and connect it to the bank account holding the money above
3. Get API key
4. Run the examples against GPT-4 to co…
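The steps above boil down to funding an OpenAI account, getting an API key, and running the examples against GPT-4. A minimal sketch of step 4, assuming the `openai` Python package is installed and the key is exported as `OPENAI_API_KEY` (the prompt text is a placeholder):

```python
import os

# Build the request payload for a GPT-4 chat completion.
# The prompt here is a hypothetical example, not from the issue.
def build_request(prompt: str) -> dict:
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Summarize this issue in one sentence.")

# Only call the API when a key is actually configured, so the
# sketch is safe to run without an account.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

Billing is per token, so the examples should be run against a capped budget first to confirm the ~500 USD estimate holds.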
-
```[tasklist]
### Roadmap
- [x] #2
- [ ] #3
- [ ] #4
- [ ] #5
- [ ] #6
- [ ] Conversation tail summary for model token limits
- [ ] Named conversation route
- [ ] https://github.com/drivly/gp…
-
Help: please increase the GPT-4 model's maximum tokens beyond 8192 on the https://nat.dev/ website. I need more than 8192; 16000 would be ideal.
-
Currently, all MemGPT server configuration is contained in a config YAML file that is baked into the docker container. However, the container and the server's configuration should be separate. This is…
-
### 💻 System environment
Windows
### 📦 Deployment environment
Docker
### 🌐 Browser
Chrome
### 🐛 Problem description
With GPT-4o streaming, the display speed cannot keep up with the transmission speed.
GPT-4o's output is extremely fast.
The UI's streamed text starts rendering slowly, then suddenly speeds up after a few seconds; presumably the API has already finished transmitting while the display is still catching up.
Is the webui's display speed a preset value, rather than the act…
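The behavior described, where the UI render rate lags the stream and then jumps, is what happens when rendering is tied directly to chunk arrival. A hypothetical sketch of one fix, decoupling the two with a buffer that the UI drains at a fixed rate per frame (class and parameter names are illustrative, not from the project):

```python
from collections import deque

# Chunks from the API go into a buffer as they arrive; the UI drains
# the buffer at a steady rate per frame, so fast API output does not
# make the display suddenly jump ahead.
class PacedRenderer:
    def __init__(self, chars_per_tick: int = 3):
        self.buffer = deque()
        self.chars_per_tick = chars_per_tick
        self.rendered = []

    def feed(self, chunk: str):
        # Called whenever a streaming chunk arrives from the API.
        self.buffer.extend(chunk)

    def tick(self) -> str:
        # Called once per UI frame; emits at most chars_per_tick
        # characters regardless of how full the buffer is.
        for _ in range(min(self.chars_per_tick, len(self.buffer))):
            self.rendered.append(self.buffer.popleft())
        return "".join(self.rendered)
```

With this scheme the visible speed is indeed a preset pacing value; the trade-off is that a too-slow rate leaves the UI trailing long after the stream has finished, so the rate could also be adapted to the buffer depth.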
-