Try `const chain = loadSummarizationChain(model, { type: 'stuff' });` if your content is not too long.
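For context, a minimal runnable sketch of the `stuff` approach (assuming the classic `langchain` package layout and an OpenAI model; with `stuff`, the whole text is sent to the model in a single call, so it has to fit in the context window):

```typescript
import { OpenAI } from "langchain/llms/openai";
import { loadSummarizationChain } from "langchain/chains";
import { Document } from "langchain/document";

const model = new OpenAI({ temperature: 0 });
const chain = loadSummarizationChain(model, { type: "stuff" });

// "stuff" concatenates all documents into one prompt.
const docs = [new Document({ pageContent: "your text here" })];
const res = await chain.call({ input_documents: docs });
console.log(res.text);
```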
Thanks @arronKler for this, but what should I do if the content is long, like 6,000 words?
Due to the max token limit of the LLM, the summarization chain's default behavior is `type: 'map_reduce'`, which means it calls the LLM to summarize each part of your document until it reaches the maximum number of iterations (which is 10) or the token count is reduced enough to make the final summarization call.

So it does indeed take time to summarize every part of your doc. You can try passing a `prompt` parameter to `loadSummarizationChain` with your own summarization prompt, and your template should look like this:
```typescript
import { PromptTemplate } from "langchain/prompts";
import { loadSummarizationChain } from "langchain/chains";

const template = `Write a concise summary of the following in 300 words:
"{text}"
CONCISE SUMMARY:`;

const myPrompt = new PromptTemplate({
  template,
  inputVariables: ["text"],
});

const chain = loadSummarizationChain(model, { prompt: myPrompt });
```
This will limit the total word count of each summarization output, but it may also reduce summarization precision.
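One caveat worth hedging: in the LangChain JS API, the `prompt` option belongs to the `stuff` type, while the default `map_reduce` type takes `combineMapPrompt` and `combinePrompt` instead, so for a long document the custom template likely needs to be wired in like this (a sketch reusing `myPrompt` from above):

```typescript
const chain = loadSummarizationChain(model, {
  type: "map_reduce",
  combineMapPrompt: myPrompt, // applied to each chunk
  combinePrompt: myPrompt, // applied when merging the chunk summaries
});
```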
Thanks for your contribution
@arronKler Do you know how I can see the intermediate steps using LangChain JS? I'm not sure my prompt is being applied; I think the default prompt is always applied for some reason.
you may need this
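A minimal sketch for inspecting what the chain does, assuming a LangChain JS version whose `map_reduce` chain accepts `returnIntermediateSteps` and the common `verbose` flag (reusing `model` and `docs` from the earlier snippets):

```typescript
import { loadSummarizationChain } from "langchain/chains";

const chain = loadSummarizationChain(model, {
  type: "map_reduce",
  returnIntermediateSteps: true, // include per-chunk summaries in the result
  verbose: true, // log each LLM call, including the rendered prompt
});

const res = await chain.call({ input_documents: docs });
console.log(res.intermediateSteps); // summaries produced for each chunk
console.log(res.text); // final combined summary
```

With `verbose: true`, the rendered prompt for each call is logged, which should make it clear whether your custom template or the default one is being used.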
Using this chain takes a lot of time to generate the output, i.e., the summary.
Here is the reference I used: https://js.langchain.com/docs/modules/chains/other_chains/summarization