Overview

We have developed a prompt and model that generates five labels for each claim: understandability, type_of_claim, type_of_medical_claim, support, harm.
We should switch the model used in the MVP to this version as a precursor to fixing #61, since it will allow us to display and sort claims by their degree of checkworthiness.

We have the option of using a fine-tuned Gemini model here, or of using in-context learning (i.e. putting the training data in the prompt). The results should be broadly similar, so let's deploy whichever is simpler.
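For the in-context learning route, the idea is just to inline the labeled training examples into the prompt ahead of the claim to classify. A minimal sketch of what that prompt construction could look like; the function name, prompt wording, and example data are all hypothetical, not from the actual codebase:

```python
import json

def build_icl_prompt(examples, claim):
    """Build a few-shot prompt by inlining labeled training examples.

    `examples` is a list of (claim_text, labels_dict) pairs; the exact
    instruction wording and label schema here are illustrative only.
    """
    parts = [
        "Label the claim with: understandability, type_of_claim, "
        "type_of_medical_claim, support, harm.\n"
    ]
    for text, labels in examples:
        # Each training example is shown as a claim followed by its labels.
        parts.append(f"Claim: {text}\nLabels: {json.dumps(labels)}\n")
    # The model is prompted to complete the labels for the new claim.
    parts.append(f"Claim: {claim}\nLabels:")
    return "\n".join(parts)
```

The trade-off versus fine-tuning is that the training data is paid for on every request (longer prompts), but there is no separate model artifact to deploy or maintain.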
Requirements
- [ ] Switch the model in vertex.py/generate_reponse() to the new model.
- [ ] Make sure the output (a JSON object) is stored in the inferred_claims table; the output will need to be serialized from JSON to a single string first.
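The JSON-to-string step in the second requirement is a plain serialization. A minimal sketch, assuming the model's output is already parsed into a Python dict; the label values below are placeholders, and the actual column handling in inferred_claims may differ:

```python
import json

# Hypothetical example of the 5-label JSON object the model returns;
# real values depend on the prompt and the claim being labeled.
labels = {
    "understandability": "high",
    "type_of_claim": "medical",
    "type_of_medical_claim": "treatment",
    "support": "unsupported",
    "harm": "low",
}

def labels_to_string(labels: dict) -> str:
    # json.dumps yields a single string suitable for a text column;
    # sort_keys makes stored rows byte-for-byte comparable and keeps
    # the round trip back to JSON trivial via json.loads.
    return json.dumps(labels, sort_keys=True)
```

Storing the serialized form rather than individual columns keeps the table schema unchanged, at the cost of needing json.loads whenever the labels are read back for display or sorting.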