-
### What is the issue?
The streamed chat-completion response from Ollama's OpenAI-compatible API repeats `"role": "assistant"` in every returned chunk. This differs from OpenAI's API, which just has…
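A minimal sketch for observing this, assuming the openai Python SDK pointed at a local Ollama instance's OpenAI-compatible endpoint; the base URL, API key, and model name below are placeholders rather than details from the report.

```python
# Sketch: print the delta role of each streamed chunk.
# Assumes Ollama is serving its OpenAI-compatible API locally and the
# "llama3" model has been pulled (both are assumptions, not from the report).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
)

# Against OpenAI's own API, only the first chunk's delta carries
# "role": "assistant"; later deltas omit it. The report is that Ollama
# repeats the role in every chunk.
for chunk in stream:
    delta = chunk.choices[0].delta
    print(repr(delta.role), repr(delta.content))
```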
-
- [ ] Create the scraper
- [ ] Check how many requests are possible per minute
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
This code snippet needs documentation: caesar_code_chat_gpt_files/main_needs_documentation.py
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Windows
###…
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
When running the Docker instance of privateGPT with Ollama, I get an error saying: TypeError: missin…
-
**Describe the bug**
Setting stream_options.include_usage to true is supposed to return the token usage, but it always ends up being None.
model: Azure OpenAI GPT-4o / GPT-4o mini
api version: [2…
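For context, a minimal sketch of the expected behaviour with the openai Python SDK; the endpoint, key, deployment name, and api_version below are placeholders, not the reporter's actual values.

```python
# Sketch: request streamed usage via stream_options.include_usage.
# Endpoint, key, deployment name, and api_version are hypothetical.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",
    api_key="...",
    api_version="2024-06-01",
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    stream_options={"include_usage": True},
)

# With include_usage enabled, the final chunk is expected to carry a
# populated usage object; the bug reported above is that it stays None.
for chunk in stream:
    if chunk.usage is not None:
        print(chunk.usage)
```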
-
### What happened?
We use custom icons for various models. Since upgrading to version 0.7.5, saved chats are missing their custom model icons after the chat is saved. Also, the model spec is not save…
-
This file could be refactored: caesar_code_chat_gpt_files/post_needs_refactoring.js
-
**helix-editor version**
24.3
**helix-gpt version**
Exact version of Helix GPT.
commit 2a047347968e (newer than 0.34)
**Describe the bug**
When trying to run code actions against copilot, I …