-
### Library name
Azure.AI.OpenAI
### Please describe the feature.
The HuggingFace chat completion streaming API is designed to imitate the OpenAI streaming response. However, due to a couple of minor…
dluc updated
4 months ago
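For context, an OpenAI-style stream is delivered as server-sent events, one `data: {...}` line per chunk, ending with `data: [DONE]`. A minimal sketch of a consumer for that format (the chunk shape follows OpenAI's documented chat-completion chunks; the function name is illustrative):

```python
import json

def accumulate_sse_deltas(lines):
    """Accumulate assistant text from OpenAI-style SSE lines.

    Each line is expected to look like 'data: {...}' carrying a chat
    completion chunk, with the stream terminated by 'data: [DONE]'.
    """
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive / comment lines
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content") is not None:
            text.append(delta["content"])
    return "".join(text)
```

Any deviation from this exact framing (different terminator, extra fields, missing `delta`) is the kind of "minor" incompatibility that breaks clients written against the OpenAI format.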
-
### 🚀 The feature
TorchServe supports streaming responses for both HTTP and gRPC endpoints.
- [ ] #2186
- [ ] #2232
### Motivation, pitch
Usually the prediction latency is high (e.g. 5 sec…
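The motivation is easiest to see in a toy sketch (not TorchServe code; all names here are illustrative): with streaming, the server yields each token as soon as it is produced, so the client sees the first token after one step of latency instead of waiting for the whole sequence.

```python
def generate_tokens(prompt, n_tokens=3):
    """Hypothetical per-token model loop: yield each token as soon as
    it is produced instead of returning the full sequence at the end."""
    for i in range(n_tokens):
        # stand-in for one step of inference latency
        yield f"tok{i} "

def consume_stream(token_iter):
    """Client side: assemble the response incrementally; partial
    results are usable before generation finishes."""
    parts = []
    for tok in token_iter:
        parts.append(tok)  # could render/forward each token here
    return "".join(parts)
```

With a 5-second end-to-end latency, this is the difference between a blank screen for 5 seconds and text appearing progressively.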
-
I used the streaming code example from https://tda-api.readthedocs.io/en/latest/streaming.html
But I received this error:
tda.streaming.UnexpectedResponseCode: unexpected response code: 3, msg is …
-
### Problem Description
When the application is deployed on the Vercel Hobby plan, a timeout error occurs if the API returns no data for roughly 25 seconds.
### Solution Description
Vercel Functio…
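One common mitigation (as an assumption about the intended fix, since the snippet is truncated) is to raise the function's `maxDuration` in `vercel.json`, within whatever cap the plan allows; the path pattern below is hypothetical:

```json
{
  "functions": {
    "api/chat.ts": {
      "maxDuration": 60
    }
  }
}
```

The other common approach is to stream the response so the first bytes are sent before the timeout fires, rather than buffering the full model output.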
-
### Describe the bug
The Python examples in [the wiki docs](https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API) are out of date, and crash with various errors.
For e…
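As a hedged sketch of what an up-to-date example might look like: the webui exposes an OpenAI-compatible endpoint (the wiki documents `/v1/chat/completions`; the port and payload below are assumptions, and the request is only built here, not sent), using only the standard library:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:5000/v1"  # default port is an assumption

def build_chat_request(messages, model="any", stream=False):
    """Build the JSON body for an OpenAI-compatible
    /chat/completions call."""
    return {"model": model, "messages": messages, "stream": stream}

def chat_completion(messages, base_url=BASE_URL):
    """POST to the endpoint; requires the webui server to be running
    with its API enabled (not executed here)."""
    body = json.dumps(build_chat_request(messages)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Examples written against the pre-1.0 `openai` Python package crash against current servers for exactly this kind of drift in endpoint paths and payload fields.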
-
## CVE-2024-43799 - Medium Severity Vulnerability
Vulnerable Library - send-0.17.1.tgz
Better streaming static file server with Range and conditional-GET support
Library home page: https://registry.…
-
Currently, it appears that only server-streaming (and when #66 is complete, client-streaming) methods support cancellation. However, cancellation is also useful for unary request/response methods. I.e…
-
I've read a few of these issues about this already, but I'm still wondering about adding some sort of 'fire-and-forget' type of call. I've created a simple server where clients can subsc…
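In plain `asyncio` terms (a generic sketch, unrelated to any specific RPC framework), fire-and-forget means scheduling the work without awaiting its result, while keeping a reference so the task isn't garbage-collected and surfacing any exception it raises:

```python
import asyncio

results = []

async def notify_subscriber(msg):
    """Stand-in for delivery work whose outcome the caller ignores."""
    await asyncio.sleep(0)
    results.append(msg)

def fire_and_forget(coro):
    """Schedule a coroutine without awaiting it."""
    task = asyncio.get_running_loop().create_task(coro)
    # retrieve the exception (if any) so it isn't silently dropped
    task.add_done_callback(lambda t: t.exception())
    return task

async def main():
    fire_and_forget(notify_subscriber("hello"))
    await asyncio.sleep(0.01)  # give the scheduled task a chance to run

asyncio.run(main())
```

The RPC-level question is whether the framework can avoid even allocating a response future for such calls, which this sketch doesn't address.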
-
When working with deeply nested code, the sidebar view is not easy to read. I think it would be better if the code was left-trimmed so that there are no columns of pure whitespace.
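The trimming itself is cheap to compute: strip the longest run of leading whitespace shared by all non-blank lines (for a single string, `textwrap.dedent` does essentially this). A minimal sketch:

```python
def trim_common_indent(lines):
    """Remove the leading whitespace shared by all non-blank lines,
    so deeply nested code hugs the left edge of the sidebar."""
    non_blank = [ln for ln in lines if ln.strip()]
    if not non_blank:
        return lines
    indent = min(len(ln) - len(ln.lstrip()) for ln in non_blank)
    return [ln[indent:] if ln.strip() else ln for ln in lines]
```

Blank lines are passed through unchanged so they don't force the common indent to zero.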
-
Hi all, I want to retrieve the source nodes from a stream response of ContextChatEngine.chat. When I look at the code, I observe that each chunk of the stream response has the source nodes attached to…
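If each chunk really does carry the source nodes, the usual pattern is to capture them while draining the stream. A generic sketch with hypothetical names (`FakeChunk` stands in for whatever chunk object the engine yields; this is not the llama-index API):

```python
class FakeChunk:
    """Stand-in for a streaming chunk that carries both a text delta
    and the retrieved source nodes (hypothetical shape)."""
    def __init__(self, delta, source_nodes):
        self.delta = delta
        self.source_nodes = source_nodes

def drain_stream(chunks):
    """Accumulate the response text while capturing source nodes from
    the first chunk that provides them."""
    text, sources = [], None
    for chunk in chunks:
        text.append(chunk.delta)
        if sources is None and chunk.source_nodes:
            sources = chunk.source_nodes
    return "".join(text), sources
```

The open question in the issue is presumably whether the engine exposes these nodes on the aggregate response object as well, so callers don't have to pick them off individual chunks.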