Right now, we're building the full prompt directly, combining the system message, context, and user message into one string. That's fine for other models, but for OpenAI chat models there is a better practice: structure the prompt as separate messages, one per role (see the LangChain docs on chat prompt templates). This is more organized and should lead to better responses.
We should be using this pattern for chains that use GPT.
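A minimal sketch of the role-based pattern, assuming the hypothetical helper name `build_chat_messages` and example prompt strings (LangChain's `ChatPromptTemplate.from_messages` builds the same structure, but plain dicts show the shape without extra dependencies):

```python
def build_chat_messages(system_prompt: str, context: str, user_question: str) -> list[dict]:
    """Split the prompt into role-tagged messages instead of one flat string.

    This is the message format OpenAI chat models expect; LangChain's
    ChatPromptTemplate.from_messages produces an equivalent structure.
    """
    return [
        # The system message carries standing instructions for the model.
        {"role": "system", "content": system_prompt},
        # The user message carries the retrieved context plus the question.
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_question}"},
    ]

messages = build_chat_messages(
    "You are a helpful assistant. Answer using only the provided context.",
    "Chains can pass context retrieved from a vector store.",
    "How should prompts be structured for chat models?",
)
```

The resulting `messages` list can be passed directly to the chat completions API instead of a single concatenated prompt string.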