-
### The Feature
```bash
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "gpt-4o",
    "metadata": {
        "guardrails": {"promp…
```
-
https://chat.zulip.org/#narrow/stream/49-development-help/subject/realm.20audit.20log.20changes/near/628995 has some context.
The fields can be `TextField`s.
As part of this, it would also be …
-
Hi! I have forked the repo to test the new AI SDK 3.0, and when deployed to Vercel it was crashing, throwing an `Application error` on the client side after a few seconds of streaming the chat response, while lo…
-
### 🧐 Problem Description
1. The conversation has consumed roughly 300K tokens, amounting to about 50K–100K characters.
2. When continuing the conversation at this point, the typewriter effect renders characters extremely slowly.
```js
{
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
  …
```
-
**Is your feature request related to a problem? Please describe.**
One of the aspects that has plagued feature development on hls.js has been access to streams that have the relevant aspe…
-
### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
```
  File "/opt/homebrew/lib/python3.10/site-packages/chat_with_mlx/app.py", line 166, in chatbot
    response = client.chat.completions.create(
  File "/opt/homebrew/lib/python3.10/site-packages/…
```
-
Hi,
I've tried creating an agent using an OpenAI Assistant as the LLM. It joins the room and works as expected until after its first utterance. After speaking the string I pass into the agent…
-
When chatting with an LLM, sometimes dir-assistant sends too much context to the LLM.
Counts appear correct on dir-assistant's end. Perhaps this is because in some cases, the embedding model's tok…
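The suspected mismatch can be illustrated with two toy tokenizers. This is a hypothetical sketch: the function names and split rules below are stand-ins, not dir-assistant's actual embedding model or LLM tokenizers.

```python
# Sketch: two stand-in tokenizers that count the same text differently.
# Hypothetical illustration; real embedding models and LLMs each use
# their own (usually subword) tokenizers, which rarely agree on counts.

def embedding_tokenize(text: str) -> list[str]:
    # Stand-in for the embedding model's tokenizer: splits on whitespace.
    return text.split()

def llm_tokenize(text: str) -> list[str]:
    # Stand-in for the LLM's tokenizer: splits each word into 4-char
    # chunks, roughly mimicking a subword tokenizer emitting more tokens.
    return [w[i:i + 4] for w in text.split() for i in range(0, len(w), 4)]

text = "internationalization considerations for tokenizer mismatches"
emb_count = len(embedding_tokenize(text))
llm_count = len(llm_tokenize(text))

# Budgeting context by the embedding-side count under-estimates what the
# LLM actually sees, so the assembled prompt can exceed its context window.
print(emb_count, llm_count)  # the LLM-side count is larger
```

If this is the cause, a conservative fix is to budget context using the LLM's own tokenizer (or apply a safety margin to the embedding-side count).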
-
There's an example of a chat client using cluster pubsub here
https://github.com/typesafehub/activator-akka-clustering/blob/master/src/main/scala/chat/ChatClient.scala
We should leverage the exa…
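For reference, the publish/subscribe shape that the linked ChatClient builds on can be sketched in a language-agnostic way. The Python below is an illustrative in-process bus only; Akka's actual DistributedPubSub mediator API (Subscribe/Publish messages to actors) differs.

```python
from collections import defaultdict
from typing import Callable

class PubSubBus:
    """Minimal in-process publish/subscribe bus (illustration only)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        # Deliver to every subscriber of the topic, the way a cluster-wide
        # mediator fans messages out to registered actors.
        for handler in self._subscribers[topic]:
            handler(message)

class ChatClient:
    """A chat participant playing the role of the linked ChatClient."""

    def __init__(self, name: str, bus: PubSubBus, room: str) -> None:
        self.name = name
        self.bus = bus
        self.room = room
        self.received: list[str] = []
        bus.subscribe(room, self.received.append)

    def say(self, text: str) -> None:
        self.bus.publish(self.room, f"{self.name}: {text}")

bus = PubSubBus()
alice = ChatClient("alice", bus, "general")
bob = ChatClient("bob", bus, "general")
alice.say("hello")
# Every room member receives the message, including the sender.
print(bob.received)  # ['alice: hello']
```

The key property to preserve from the example is that senders publish to a topic rather than to peers directly, so clients join or leave the room without the others needing to know.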