liyucheng09 / Selective_Context

Compress your input to ChatGPT or other LLMs so they can process 2x more content while saving 40% of memory and GPU time.

question about ContinueConversation task #24

Closed · lizijian-buaa closed this 6 months ago

lizijian-buaa commented 6 months ago

Hi. While reading the code, I noticed that for the ContinueConversation task, you seem to use all the conversation rounds to construct the input prompt, which means it includes GPT's last-round reply from the original dialogue. Is this a mis-implementation? By the way, glad to see such good work. Thanks!

liyucheng09 commented 6 months ago

Thanks for your interest in Selective Context.

Actually, we use the whole context except the last utterance; see here for the details.

lizijian-buaa commented 6 months ago

Thanks for the quick reply, but I think "prompt = '\n'.join(lines[:-1])" is not used in the code. Should we set sections = [prompt, last_response] instead of [content, last_response]?
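For illustration, here is a minimal sketch of the split being discussed. The variable names (lines, prompt, content, last_response, sections) follow the thread, not necessarily the actual repository code, and the sample conversation is made up:

```python
# Hypothetical sketch of the prompt construction discussed above;
# variable names follow this thread, not necessarily the repo's code.

# A multi-turn conversation, one utterance per line.
content = (
    "User: What is selective context?\n"
    "Assistant: It prunes low-information tokens before calling the LLM.\n"
    "User: Does it work on dialogues?\n"
    "Assistant: Yes, it also supports conversation continuation."
)

lines = content.split('\n')

# Context given to the model: every round except the final reply.
prompt = '\n'.join(lines[:-1])

# The last assistant utterance is held out as the reference continuation.
last_response = lines[-1]

# The suggested fix: pass the held-out prompt rather than the full content,
# so the reference reply does not leak into the input.
sections = [prompt, last_response]
```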

liyucheng09 commented 6 months ago

Could you run a debug and check here, to see whether it's using the correct context?

lizijian-buaa commented 6 months ago

Oh, I see. Thanks a lot.