WAILMAGHRANE opened this issue 3 months ago
I don't know how to increase it, but I'm using the system prop of the ai package to limit the response, and the result is pretty good.
import { streamText } from "ai";

// openaiModel is the project's configured OpenAI model helper; messages comes from the caller.
const result = await streamText({
  model: openaiModel("gpt-4"),
  system:
    `You are a virtual assistant. ` + // trailing space so the two sentences don't run together
    `Your answers can not have more than 100 words.`,
  messages,
});
Yes, it's good, but I have a report that exceeds 1000 tokens, and I need to increase the token limit to accommodate the entire report.
Hi, does anyone have any ideas or solutions to increase the response token length?
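One possible direction, assuming the version of the ai package in use exposes a maxTokens setting on streamText (some newer versions name it maxOutputTokens, so it's worth checking the SDK docs), is to raise that cap and relax the word limit in the system prompt. A minimal sketch, not a confirmed fix:

import { streamText } from "ai";

// Sketch only: openaiModel is the same model helper used above, and
// maxTokens is assumed to be the option that bounds the generated response.
const result = await streamText({
  model: openaiModel("gpt-4"),
  // Relax the word cap so the prompt itself no longer truncates long reports.
  system:
    `You are a virtual assistant. ` +
    `Your answers may be up to 1500 words.`,
  maxTokens: 2000, // assumed upper bound on response tokens; adjust to fit the report
  messages,
});

The idea is that both limits matter: the system prompt caps the length the model aims for, while the token setting caps what it is allowed to generate, so the longer report needs both raised.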