-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Linux
### P…
-
After the stream is over, make sure to store the chat and keep it synced
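A common pattern for this is to buffer the streamed chunks as they arrive and persist the complete message only once the stream ends; a minimal sketch, where the `ChatStore` class and the chunk source are hypothetical stand-ins for the app's real storage and stream:

```python
from typing import Iterable, List

class ChatStore:
    """Hypothetical in-memory store; a real app would sync to a database here."""
    def __init__(self) -> None:
        self.messages: List[str] = []

    def save(self, message: str) -> None:
        self.messages.append(message)

def consume_stream(chunks: Iterable[str], store: ChatStore) -> str:
    """Accumulate chunks as they arrive, then store the full reply at the end."""
    parts: List[str] = []
    for chunk in chunks:
        parts.append(chunk)   # the UI would render each chunk here
    full = "".join(parts)
    store.save(full)          # persist only after the stream is over
    return full

store = ChatStore()
reply = consume_stream(["Hel", "lo ", "world"], store)
```

Saving after the final chunk (rather than per chunk) keeps the stored record identical to what the user saw, which is what keeps it "synced".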
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I have an LLM model (specifically from Anthropic) and I want to use the native tools fro…
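The question appears to concern Anthropic's native tool use. A sketch of a tool definition in the `name` / `description` / `input_schema` shape the Messages API expects; the `get_weather` tool itself is a made-up example:

```python
# Hypothetical tool definition in the shape Anthropic's Messages API uses:
# a name, a description, and a JSON Schema for the tool's input.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
# This dict would be passed as tools=[weather_tool] on the chat request.
```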
-
I wanted to go and see the file that Copilot was talking about before its response finished. It seems like links to files and methods are detected in real time, as they appear in blue in the chat, but clic…
-
I have been working on integrating Ollama tools with [oterm](https://github.com/ggozad/oterm).
When making a call to `AsyncClient.chat()` with `tools` defined and `stream=True`, the response gives the…
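One way to consume a streamed response that may contain tool calls is to accumulate both text content and `tool_calls` per chunk. A sketch using a stubbed async stream in place of a live `AsyncClient.chat()` call; the chunk shapes here are assumptions for illustration, not Ollama's exact schema:

```python
import asyncio
from typing import Any, AsyncIterator, Dict, List, Tuple

async def fake_chat_stream() -> AsyncIterator[Dict[str, Any]]:
    """Stand-in for AsyncClient.chat(..., stream=True); yields chunk dicts."""
    yield {"message": {"content": "Calling tool"}}
    yield {"message": {"content": "",
                       "tool_calls": [{"function": {"name": "add",
                                                    "arguments": {"a": 1, "b": 2}}}]}}

async def collect(stream: AsyncIterator[Dict[str, Any]]) -> Tuple[str, List[Dict[str, Any]]]:
    """Gather streamed text and any tool calls emitted along the way."""
    text_parts: List[str] = []
    tool_calls: List[Dict[str, Any]] = []
    async for chunk in stream:
        msg = chunk.get("message", {})
        text_parts.append(msg.get("content", ""))
        tool_calls.extend(msg.get("tool_calls", []))
    return "".join(text_parts), tool_calls

text, calls = asyncio.run(collect(fake_chat_stream()))
```

Collecting tool calls separately from text lets the client dispatch them once the stream finishes, even when they arrive mid-stream.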
-
```py
ChatTTS.Chat.infer(
self,
text,
stream=False,
lang=None,
skip_refine_text=False,
refine_text_only=False,
use_decoder=True,
do_text_normalization=True,
…
```
-
```
Traceback (most recent call last):
File "/data/anaconda3/envs/torch/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
re…
```
-
-
Now when I use the chat functionality, it just returns the whole response all at once.
What should I do to make the response display one chunk at a time?
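Incremental display usually requires two things: requesting a streamed response (e.g. `stream=True` on the chat call) and rendering each chunk as it arrives instead of joining the chunks first. A minimal sketch with a stubbed stream standing in for the real chat API:

```python
import sys
from typing import Iterator, List

def fake_stream() -> Iterator[str]:
    """Stand-in for a streamed chat response (e.g. stream=True on the call)."""
    for piece in ["Hello", ", ", "world", "!"]:
        yield piece

def display_streamed(chunks: Iterator[str]) -> str:
    """Render each chunk as soon as it arrives, then return the full text."""
    shown: List[str] = []
    for chunk in chunks:
        sys.stdout.write(chunk)  # print the chunk immediately...
        sys.stdout.flush()       # ...and flush so it appears without buffering
        shown.append(chunk)
    return "".join(shown)

result = display_streamed(fake_stream())
```

If the call is made without `stream=True`, the server sends one complete response, so no amount of client-side looping will make it appear chunk by chunk.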
-
### Confirm this is not an issue with the underlying OpenAI API
- [X] This is an issue with the Python library
### Confirm this is not an issue with Azure OpenAI
- [X] This is not an issue wi…