Read the official documentation.
This project extends OpenAI's API to support streaming chat completions on both the server (Node.js) and client (browser).
Note: This is an unofficial working solution until OpenAI adds streaming support. The issue is being tracked here: "How to use stream: true?" #18.
If this project helped you, please consider buying me a coffee or sponsoring me. Your support is much appreciated!
```bash
npm i openai-ext
```
Use the following solution in a browser environment:
```js
import { OpenAIExt } from "openai-ext";

// Configure the stream (use type ClientStreamChatCompletionConfig for TypeScript users)
const streamConfig = {
  apiKey: `123abcXYZasdf`, // Your API key
  handler: {
    // Content contains the string draft, which may be partial. When isFinal is true, the completion is done.
    onContent(content, isFinal, xhr) {
      console.log(content, "isFinal?", isFinal);
    },
    onDone(xhr) {
      console.log("Done!");
    },
    onError(error, status, xhr) {
      console.error(error);
    },
  },
};

// Make the call and store a reference to the XMLHttpRequest
const xhr = OpenAIExt.streamClientChatCompletion(
  {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Tell me a funny joke." },
    ],
  },
  streamConfig
);
```
If you'd like to stop the completion, call `xhr.abort()`. The `onDone()` handler will be called.

```js
xhr.abort();
```
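Since `content` holds the string draft of the whole completion so far (not just the newest token), a UI should replace its rendered text on each call rather than append. Here's a minimal sketch of wiring the handler into a page, assuming a hypothetical element with ID `output`:

```ts
import { OpenAIExt } from "openai-ext";

// Hypothetical output element; any element you control will do.
const output = document.getElementById("output")!;

const xhr = OpenAIExt.streamClientChatCompletion(
  {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Tell me a funny joke." }],
  },
  {
    apiKey: `123abcXYZasdf`, // Your API key
    handler: {
      // content is the draft so far, so replace the rendered text rather than append.
      onContent(content, isFinal, xhr) {
        output.textContent = content;
      },
      onDone(xhr) {
        console.log("Done!");
      },
      onError(error, status, xhr) {
        console.error(error);
      },
    },
  }
);
```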
Use the following solution in a Node.js or server environment:
```js
import { Configuration, OpenAIApi } from 'openai';
import { OpenAIExt } from 'openai-ext';

const apiKey = `123abcXYZasdf`; // Your API key
const configuration = new Configuration({ apiKey });
const openai = new OpenAIApi(configuration);

// Configure the stream (use type ServerStreamChatCompletionConfig for TypeScript users)
const streamConfig = {
  openai: openai,
  handler: {
    // Content contains the string draft, which may be partial. When isFinal is true, the completion is done.
    onContent(content, isFinal, stream) {
      console.log(content, "isFinal?", isFinal);
    },
    onDone(stream) {
      console.log('Done!');
    },
    onError(error, stream) {
      console.error(error);
    },
  },
};

const axiosConfig = {
  // ...
};

// Make the call to stream the completion
OpenAIExt.streamServerChatCompletion(
  {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Tell me a funny joke.' },
    ],
  },
  streamConfig,
  axiosConfig
);
```
If you'd like to stop the completion, call `stream.destroy()`. The `onDone()` handler will be called.
```js
const response = await OpenAIExt.streamServerChatCompletion(...);
const stream = response.data;
stream.destroy();
```
You can also stop the completion using an Axios cancellation in the Axios config (pending #134).
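Putting the server pieces together, here's a minimal sketch that relays a streamed completion to the browser as a chunked HTTP response. It assumes Express; the `/joke` route and port are hypothetical. Because `content` is the draft so far, only the newly added text is written on each call:

```ts
import express from "express";
import { Configuration, OpenAIApi } from "openai";
import { OpenAIExt } from "openai-ext";

const app = express();
const openai = new OpenAIApi(new Configuration({ apiKey: `123abcXYZasdf` })); // Your API key

// Hypothetical route that relays a streamed completion as a chunked response.
app.get("/joke", (req, res) => {
  res.setHeader("Content-Type", "text/plain; charset=utf-8");
  let sent = 0; // Length of the draft already written to the response
  OpenAIExt.streamServerChatCompletion(
    {
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Tell me a funny joke." }],
    },
    {
      openai,
      handler: {
        onContent(content, isFinal, stream) {
          // content is the draft so far; write only the part we haven't sent yet.
          res.write(content.slice(sent));
          sent = content.length;
        },
        onDone(stream) {
          res.end();
        },
        onError(error, stream) {
          console.error(error);
          res.end();
        },
      },
    }
  );
});

app.listen(3000); // Hypothetical port
```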
Under the hood, the function `OpenAIExt.parseContentDraft(dataString)` is used to extract completion content from a data string when streaming data in this library. Feel free to use this if you'd like to handle streaming in a different way than this library provides.

The data string contains lines of JSON completion data starting with `data:` that are separated by two newlines. The completion is terminated by the line `data: [DONE]`, at which point the completion content can be considered final and done.
When passed a data string, the function returns completion content in the following shape:
```ts
{
  content: string; // Content string. May be partial.
  isFinal: boolean; // When true, the content string is complete and the completion is done.
}
```
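For example, the following sketch parses a small hand-written data string. The chunk JSON is abbreviated for illustration, but follows the `delta.content` shape the OpenAI API streams back:

```ts
import { OpenAIExt } from "openai-ext";

// A hand-written data string in the streamed format described above.
// The chunk JSON is abbreviated for illustration.
const dataString =
  'data: {"choices":[{"delta":{"content":"Why did"},"index":0}]}\n\n' +
  'data: {"choices":[{"delta":{"content":" the chicken cross the road?"},"index":0}]}\n\n' +
  'data: [DONE]\n\n';

const draft = OpenAIExt.parseContentDraft(dataString);
console.log(draft.content); // The draft content parsed so far
console.log(draft.isFinal); // true, since the string ends with the line data: [DONE]
```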
If you're using this library for streaming completions, parsing is handled for you automatically and the result will be provided via the `onContent` handler callback documented above.
Type definitions have been included for TypeScript support.
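For example, the client stream config above can be annotated like so; this sketch assumes `ClientStreamChatCompletionConfig` is importable from the package root:

```ts
import { ClientStreamChatCompletionConfig } from "openai-ext";

// Typed version of the client stream config (import path assumed).
const streamConfig: ClientStreamChatCompletionConfig = {
  apiKey: `123abcXYZasdf`, // Your API key
  handler: {
    onContent(content, isFinal, xhr) {
      console.log(content, "isFinal?", isFinal);
    },
    onDone(xhr) {
      console.log("Done!");
    },
    onError(error, status, xhr) {
      console.error(error);
    },
  },
};
```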
Favicon by Twemoji.
Open source software is awesome and so are you. 😎
Feel free to submit a pull request for bugs or additions, and make sure to update tests as appropriate. If you find a mistake in the docs, send a PR! Even the smallest changes help.
For major changes, open an issue first to discuss what you'd like to change.
If you found this project helpful, let the community know by giving it a star: 👉⭐
See LICENSE.md.