nvms / wingman

Your pair programming wingman. Supports OpenAI, Anthropic, or any LLM on your local inference server.
https://marketplace.visualstudio.com/items?itemName=nvms.ai-wingman
ISC License

Add configuration for openai response type: stream or buffer #14

Closed: nvms closed this issue 8 months ago

nvms commented 1 year ago

Right now we define an onProgress handler for every OpenAI request. When transitive-bullshit/chatgpt-api sees this handler, it configures fetch to receive the response in streamed chunks rather than as a single buffered body.
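
For context, the library's behavior looks roughly like this (a paraphrased sketch of the chatgpt-api.ts source linked under "Relevant" below, not its verbatim code):

```typescript
// Rough paraphrase of chatgpt-api's request logic: streaming is implied
// purely by the presence of an onProgress handler.
async function sendMessage(
  text: string,
  opts: { onProgress?: (partial: { text: string }) => void } = {}
) {
  const stream = !!opts.onProgress;

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: text }],
      stream, // true: server-sent-event chunks; false: one buffered JSON body
    }),
  });

  if (stream) {
    // ...read res.body incrementally and invoke opts.onProgress per chunk
  } else {
    return res.json(); // single complete response
  }
}
```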

Introduce a configuration option that lets the user request a buffered response instead; a possible shape is sketched below.
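
A hypothetical sketch of how that option could be wired up (the setting key `wingman.openai.responseType` and the surrounding names are illustrative, not decided by this issue): read the setting and only attach onProgress when streaming is requested, since omitting the handler is what switches chatgpt-api to its buffered path.

```typescript
import * as vscode from "vscode";
import { ChatGPTAPI } from "chatgpt";

// Hypothetical setting: "wingman.openai.responseType": "stream" | "buffer".
// The key name is illustrative; the issue does not fix the final name.
async function askOpenAI(api: ChatGPTAPI, prompt: string) {
  const responseType = vscode.workspace
    .getConfiguration("wingman.openai")
    .get<"stream" | "buffer">("responseType", "stream");

  return api.sendMessage(prompt, {
    // Omitting onProgress entirely makes chatgpt-api skip its streaming
    // code path and return a single buffered response.
    onProgress:
      responseType === "stream"
        ? (partial) => console.log(partial.text) // e.g. update the UI incrementally
        : undefined,
  });
}
```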

Relevant:

https://github.com/transitive-bullshit/chatgpt-api/blob/main/src/chatgpt-api.ts#L207