-
Currently, the `AI` class generates only batch completions, so we have to wait until the whole completion is generated before we can send it back to the user. A common way to improve UX is to stream genera…
-
Currently, the `AI` class handles only batch completions. To improve UX, it should also support streaming.
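A minimal sketch of what such a streaming API could look like, using C# 8's `IAsyncEnumerable<T>`; the `AI` class shape, the `CompleteStreamingAsync` name, and the simulated token source are illustrative assumptions, not the project's actual API:
```c#
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

public class AI
{
    // Hypothetical streaming counterpart to the batch API: tokens are
    // yielded as they arrive instead of buffering the whole completion.
    // The token source is simulated here.
    public async IAsyncEnumerable<string> CompleteStreamingAsync(
        string prompt,
        [EnumeratorCancellation] CancellationToken ct = default)
    {
        foreach (var token in new[] { "Hello", ", ", "world", "!" })
        {
            await Task.Delay(50, ct); // stand-in for per-token model latency
            yield return token;
        }
    }
}

public static class Demo
{
    public static async Task Main()
    {
        var ai = new AI();
        await foreach (var token in ai.CompleteStreamingAsync("Say hello"))
        {
            Console.Write(token); // forward each token to the user immediately
        }
    }
}
```
With this shape, callers can start relaying output after the first token instead of waiting for the full completion.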
-
### Describe the bug
```c#
var response = await _client.InvokeModelWithResponseStreamAsync(request);
// There are two ways to get the streaming events. One is enumeratio…
```
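For context, here is a minimal sketch of both consumption styles the comment alludes to, assuming the `PayloadPart`, `ChunkReceived`, and `StartProcessing` members exposed by the AWSSDK.BedrockRuntime package (verify against the SDK version in use); a given response should be consumed with one style or the other, not both:
```c#
using System;
using System.Text;
using System.Threading.Tasks;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

public static class BedrockStreamingDemo
{
    // Style 1: enumerate the event stream, handling events as they arrive.
    public static async Task EnumerateAsync(
        IAmazonBedrockRuntime client, InvokeModelWithResponseStreamRequest request)
    {
        var response = await client.InvokeModelWithResponseStreamAsync(request);
        foreach (var evt in response.Body)
        {
            if (evt is PayloadPart part)
            {
                // Each PayloadPart carries one JSON chunk of model output.
                Console.Write(Encoding.UTF8.GetString(part.Bytes.ToArray()));
            }
        }
    }

    // Style 2: subscribe to ChunkReceived and let the SDK pump events.
    public static async Task SubscribeAsync(
        IAmazonBedrockRuntime client, InvokeModelWithResponseStreamRequest request)
    {
        var response = await client.InvokeModelWithResponseStreamAsync(request);
        response.Body.ChunkReceived += (sender, e) =>
            Console.Write(Encoding.UTF8.GetString(e.EventStreamEvent.Bytes.ToArray()));
        response.Body.StartProcessing();
    }
}
```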
-
### Describe the feature
Add support for using Lambda response streaming when returning streams from a .NET Lambda function.
https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-st…
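For illustration, one hypothetical shape such a streaming handler could take: instead of returning a fully buffered payload, the runtime hands the function a writable stream that is flushed to the client incrementally. The signature and the `responseStream` parameter are assumptions, not the actual aws-lambda-dotnet API:
```c#
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.Lambda.Core;

public class Function
{
    // Hypothetical streaming handler shape: the function writes to a stream
    // that is pushed to the client incrementally. This signature is
    // illustrative only, not the real aws-lambda-dotnet API.
    public async Task FunctionHandler(Stream requestBody, Stream responseStream, ILambdaContext context)
    {
        using var writer = new StreamWriter(responseStream, Encoding.UTF8);
        for (var i = 0; i < 10; i++)
        {
            await writer.WriteLineAsync($"chunk {i}");
            await writer.FlushAsync(); // push bytes out as soon as they are written
        }
    }
}
```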
-
It would be great if LLRT supported Lambda response streaming as per https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html#runtimes-custom-response-streaming.
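The contract in the linked docs is language-agnostic: the runtime POSTs the payload to the runtime API with chunked transfer encoding and marks it with the `Lambda-Runtime-Function-Response-Mode: streaming` header. A rough sketch in C# (to match the other snippets here); the type and method names are made up, and the header and endpoint details should be checked against the linked docs:
```c#
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Chunked body: TryComputeLength returning false forces
// Transfer-Encoding: chunked, so each flush reaches the client as a chunk.
sealed class ChunkedBody : HttpContent
{
    protected override async Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        for (var i = 0; i < 5; i++)
        {
            var chunk = Encoding.UTF8.GetBytes($"chunk {i}\n");
            await stream.WriteAsync(chunk, 0, chunk.Length);
            await stream.FlushAsync();
        }
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }
}

public static class StreamingRuntimeSketch
{
    // requestId comes from the usual GET /runtime/invocation/next call.
    public static async Task RespondStreamingAsync(HttpClient http, string requestId)
    {
        var api = Environment.GetEnvironmentVariable("AWS_LAMBDA_RUNTIME_API");
        var url = $"http://{api}/2018-06-01/runtime/invocation/{requestId}/response";
        using var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Content = new ChunkedBody(),
        };
        // Marks the response as streamed rather than buffered.
        request.Headers.Add("Lambda-Runtime-Function-Response-Mode", "streaming");
        var response = await http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();
    }
}
```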
-
Enable streaming support in the solution.
-
### Is your feature request related to a problem? Please describe.
The results from `IAgent.GenerateResponse` and `IStreamingAgent.GenerateStreamingResponse` are not consistent with each other, whic…
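One hypothetical way to make the two results consistent is to have both methods traffic in the same message abstraction, with the streaming variant yielding partial messages that concatenate to the batch result. Apart from the two method names quoted from the issue, everything here (the `IMessage` type, parameters) is an assumption:
```c#
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Shared message abstraction so batch and streaming results line up:
// the partial IMessage values from the streaming variant, concatenated,
// should equal the single IMessage the batch variant returns.
public interface IMessage
{
    string Content { get; }
}

public interface IAgent
{
    Task<IMessage> GenerateResponse(
        IEnumerable<IMessage> history, CancellationToken ct = default);
}

public interface IStreamingAgent : IAgent
{
    IAsyncEnumerable<IMessage> GenerateStreamingResponse(
        IEnumerable<IMessage> history, CancellationToken ct = default);
}
```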
-
### System Info
Attempting to reuse an existing OpenAI client to stream responses from an HF endpoint doesn't work due to a couple of differences. In my case the differences break the .NET client in Azu…