Closed: ff-dh closed this issue 3 months ago
Interesting idea... One complication is that the prompt is used to extract claims and store them in the database, so we'd need to version the prompts to keep track of which prompt generated which set of results.
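A minimal sketch of how that versioning could work: derive a stable version identifier from the prompt text itself and attach it to each stored claim. The helper names and row schema here are assumptions, not the actual db layer.

```python
import hashlib

# Hypothetical default prompt; the real one lives in prompts.py.
EXTRACTION_PROMPT = "Extract the claims made in the following text: ..."


def prompt_version(prompt: str) -> str:
    """Derive a stable, short version id from the prompt text itself."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]


def claims_to_rows(doc_id: int, claims: list[str], prompt: str) -> list[dict]:
    """Build db rows that record which prompt produced each claim."""
    version = prompt_version(prompt)
    return [
        {"doc_id": doc_id, "text": claim, "prompt_version": version}
        for claim in claims
    ]
```

Hashing the text means any edit to the prompt automatically yields a new version, with no manual bookkeeping.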
I see the prompt as part of the model. Elsewhere, we don't let users (of any kind) swap between models - we just present them with the single 'best' one.
I recently came across promptfoo, which we might look into for comparing prompts - but that'll be for the AI team to use before pushing the best prompt into production.
Currently the prompt given to Gemini is sourced from
src/raphael_backend_flask/prompts.py
and is therefore hardcoded. If there were an input on the landing page for a custom prompt (pre-filled with the default), data scientists (and select users?) could iterate on results or demonstrate different prompts. Would this be a useful and/or desirable feature?