Closed 2mawi2 closed 1 year ago
Hey! Thanks for sharing feedback with the community!
Maybe this could be related to your other idea of having a router that selects the correct few-shot prompt to send, given the context. E.g. if the user is asking about git commands, we could add a few related few-shot examples for git commands, etc.?
If we want to do this properly, we would need some high-level component that generates our prompt. From there we could tune it in different directions. I would argue that less context could be more here: we could, for example, keep a generic pool of few-shot examples Z and then include only a subset of Z in order to steer the prompt in a specific direction (e.g. git).
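A minimal sketch of what that routing component could look like, assuming a keyword-based router and a chat-style message format (the pool contents, topic names, and function names here are all hypothetical, just to illustrate the subset-of-Z idea):

```python
# Hypothetical pool Z of few-shot examples, each tagged with a topic.
FEW_SHOT_POOL = [
    {"topic": "git", "user": "undo my last commit", "assistant": "git reset --soft HEAD~1"},
    {"topic": "git", "user": "show staged changes", "assistant": "git diff --staged"},
    {"topic": "docker", "user": "list running containers", "assistant": "docker ps"},
]

def route_topic(query: str) -> str:
    """Naive keyword router; a real one could be a classifier or an LLM call."""
    q = query.lower()
    if "git" in q or "commit" in q:
        return "git"
    if "docker" in q or "container" in q:
        return "docker"
    return "general"

def build_prompt(query: str) -> list[dict]:
    """Build the prompt from only the subset of Z matching the routed topic."""
    topic = route_topic(query)
    subset = [ex for ex in FEW_SHOT_POOL if ex["topic"] == topic]
    messages = []
    for ex in subset:
        messages.append({"role": "user", "content": ex["user"]})
        messages.append({"role": "assistant", "content": ex["assistant"]})
    messages.append({"role": "user", "content": query})
    return messages

prompt = build_prompt("how do I amend a commit message?")
# Only the two git examples are included, plus the user's query.
```

The point is that the high-level component owns the pool, and routing just decides which slice of it the model sees.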
Since GPT is a few-shot learner (see: Language Models are Few-Shot Learners), we can significantly improve model performance by providing examples instead of only providing context in a zero-shot approach.