Closed: dearden closed this 4 months ago
Looks good - but does it work?!
It did when I tried it out, but once the evaluation scripts are done, I can answer that more confidently.
Nice – looks good!
So I think the corresponding app change is this:
diff --git a/src/raphael_backend_flask/process.py b/src/raphael_backend_flask/process.py
index 4ae6f9c..c45d421 100644
--- a/src/raphael_backend_flask/process.py
+++ b/src/raphael_backend_flask/process.py
@@ -44,7 +44,7 @@ def extract_claims(run: dict) -> Iterable[dict[str, Any]]:
 parsed_claim = {
     "run_id": run["id"],
     "claim": claim["claim"],
-    "raw_sentence_text": chunk["text"],
+    "raw_sentence_text": claim["original_text"],
     "labels": json.dumps(labels_dict),
     "offset_start_s": float(chunk["start_offset"]),
 }
Then the title-text popup in the app will show the original text as output by the LLM, rather than the whole chunk.
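For context, here is a minimal sketch of how the loop around that line could look after the change. The `chunks`/`claims` field names, the loop shape, and the labels handling are assumptions for illustration; only the yielded row mirrors the diff above.

```python
import json
from typing import Any, Iterable


def extract_claims(run: dict) -> Iterable[dict[str, Any]]:
    """Sketch only: yields one row per claim the LLM extracted."""
    for chunk in run["chunks"]:            # hypothetical field name
        for claim in chunk["claims"]:      # hypothetical field name
            labels_dict = claim.get("labels", {})  # hypothetical
            yield {
                "run_id": run["id"],
                # Paraphrased claim produced by the updated prompt.
                "claim": claim["claim"],
                # Direct quote as output by the LLM, not the whole chunk text.
                "raw_sentence_text": claim["original_text"],
                "labels": json.dumps(labels_dict),
                "offset_start_s": float(chunk["start_offset"]),
            }
```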
Fixes #95.
Updates the prompt so it gives a paraphrase of the claim alongside the direct quote.
The paraphrased version goes in the "claim" field, and the direct quote goes in a field called "original_text".
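As a rough illustration of what the updated prompt is expected to return, here is a made-up claim; the wording is invented, and only the two field names come from this PR:

```python
# Hypothetical example of one claim emitted by the updated prompt.
example_claim = {
    # Paraphrase of what was said, used as the claim itself.
    "claim": "The speaker says the project launched in 2021.",
    # Verbatim quote from the transcript, kept alongside the paraphrase.
    "original_text": "we actually got this thing off the ground back in 2021",
}
```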