alan-ai-learner opened this issue 1 year ago (status: Open)
Have you tried running the model multiple times but just querying one specific part of the format and then combining them afterwards? Example:
I haven't tested these prompts, so you might have to do some more testing.
I would guess that a language model will be better at completing your smaller tasks if it doesn't have to 'keep track' of all of your requirements, and it will give more consistent output. You might have to add something like "only write bullet points" to prevent a short introductory text from being generated before your desired output.
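A minimal sketch of this split-query idea, with one small focused prompt per field instead of one big combined prompt. The `ask_model` function and the prompt wordings are placeholders, not tested against Vicuna; swap in your actual inference call:

```python
def ask_model(prompt: str) -> str:
    """Placeholder for the real model call (e.g. whatever API you use
    to query vicuna-13b). Returns a canned answer here so the sketch runs."""
    return "- example bullet point"

def minutes_for_chunk(transcript_chunk: str) -> dict:
    # One focused query per field; each run only has to track one requirement.
    queries = {
        "topics": "List the topics discussed. Only write bullet points.\n\n",
        "summary": "Summarize this meeting transcript. Only write bullet points.\n\n",
        "action_points": "List the action points agreed on. Only write bullet points.\n\n",
    }
    return {field: ask_model(prompt + transcript_chunk)
            for field, prompt in queries.items()}

result = minutes_for_chunk("...transcript text...")
```

Afterwards you can merge the three per-field answers into the final minutes document however you like.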
As far as I can tell from the code, the context length is the maximum number of tokens the model sees at once, and it has to hold both the prompt and past answers, so it works like prompt + output with GPT-3, just shorter.
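Since the window has to hold the prompt, the transcript chunk, and the generated answer, the usable chunk size is roughly the window minus everything else. A rough budget calculation (the 2048-token window is vicuna-13b's default; the other numbers are placeholder estimates):

```python
MAX_CONTEXT = 2048       # vicuna-13b's default context window
PROMPT_TOKENS = 150      # instruction + format template (estimate)
MAX_OUTPUT_TOKENS = 512  # room reserved for the generated minutes

# Tokens left over for the transcript chunk itself.
chunk_budget = MAX_CONTEXT - PROMPT_TOKENS - MAX_OUTPUT_TOKENS
print(chunk_budget)  # 1386
```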
@biosfood thank you so much for answering! I'll try this and let you know how it went.
> Have you tried running the model multiple times but just querying one specific part of the format and then combining them afterwards?
@biosfood, since my transcripts are longer than the max context length, I split them into chunks. I'm trying to generate all three things (topics, summary, action points) for each chunk at once, and at the end I'm combining them.
Working on your suggestion.
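For reference, a simple way to build those chunks, splitting on word boundaries (using words as a crude proxy for tokens; the model's own tokenizer would give a more accurate count):

```python
def chunk_transcript(transcript: str, max_words: int = 1000) -> list:
    """Split a transcript into word-bounded chunks that fit the context window.

    max_words is a rough stand-in for a token budget; tune it to leave room
    for the prompt and the generated output.
    """
    words = transcript.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# 2500 words with a 1000-word budget -> 3 chunks
chunks = chunk_transcript("word " * 2500, max_words=1000)
```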
Hi @infwinston @Mearman @zhisbug @jegonzal @Shawnlu25, I'm trying to generate meeting minutes using vicuna-13b from a chunk of my meeting transcript (due to context-size restrictions I split the transcript into chunks and pass them one by one). Here is the expected format I want, and Vicuna generates it ..
But this behaviour changes when I pass the next chunk of the transcript, and so on...
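One thing that sometimes helps with format drift across chunks is sending each chunk as a fresh, stateless prompt that repeats the full format instruction every time, instead of relying on earlier conversation turns. A hedged sketch (the instruction wording is illustrative, not tested):

```python
FORMAT_INSTRUCTION = (
    "Extract meeting minutes from the transcript below.\n"
    "Respond in exactly this format:\n"
    "Topics:\n- ...\n"
    "Summary:\n- ...\n"
    "Action points:\n- ...\n"
    "Only write bullet points, no introduction.\n\n"
)

def build_prompt(chunk: str) -> str:
    # Fresh prompt per chunk: every chunk sees the identical format
    # instruction, so the output format shouldn't depend on which
    # chunk is being processed.
    return FORMAT_INSTRUCTION + chunk

p = build_prompt("chunk text here")
```

Adding one short example of the desired output to the instruction (a one-shot prompt) can stabilize the format further, at the cost of a few hundred tokens of your context budget per chunk.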