CodingTrain / Bizarro-Devin


fine tuning model? #82

Open shiffman opened 3 months ago

shiffman commented 3 months ago

Just a reminder here for me to experiment with a fine-tuned model rather than an out-of-the-box LLM... though I think the RAG is working well for now for custom personality, language, and info.

dipamsen commented 3 months ago

Currently for RAG, the following prompt is used:

Dan says: {prompt}
Additional context you can use from most relevant to less relevant:
- {context1}
- {context2..10}

Current code in the editor:
```
{currentCode}
```

For it to better capture "custom personality, language, and info", we could change the prompt to state that the retrieved context consists of words said by him, and/or to instruct the model to mimic the style of that text (see the sketch below). (One thing to note: this additional context is only available during chatting, so the model may not pick up these traits during code explanation.)
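For example, a prompt builder along these lines could add that framing. This is only a rough sketch: `buildPrompt` and its parameter names are placeholders, not the project's actual code, and the wording of the instruction is just one possibility.

````js
// Sketch only: buildPrompt, contexts, and currentCode are hypothetical names,
// not the actual identifiers used in Bizarro-Devin.
function buildPrompt(userMessage, contexts, currentCode) {
  // Turn the retrieved snippets into a bulleted list, most relevant first.
  const contextList = contexts.map((c) => `- ${c}`).join('\n');
  return [
    `Dan says: ${userMessage}`,
    // New framing: present the retrieved snippets as Dan's own words and
    // ask the model to mimic their personality and style.
    'The following excerpts are things Dan has said or written, from most relevant to least relevant.',
    'Mimic their personality, vocabulary, and style in your reply:',
    contextList,
    '',
    'Current code in the editor:',
    '```',
    currentCode,
    '```',
  ].join('\n');
}
````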

We could also consider moving the prompt and code below the other context, to see whether that makes any difference.
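For example, the reordered prompt might look like this (same content as the current template, just with the context moved above the prompt and code):

Additional context you can use from most relevant to less relevant:
- {context1}
- {context2..10}

Dan says: {prompt}

Current code in the editor:
```
{currentCode}
```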