caretech-owl / gerd

Generating and evaluating relevant documentation (GERD)
https://towardsdatascience.com/running-llama-2-on-cpu-inference-for-document-q-a-3d636037a3d8
MIT License

[UC]: QA: The model's answer is checked through a fact-checking procedure (if it is technically possible) #18

Closed Depo14 closed 2 months ago

Depo14 commented 12 months ago

Summary

The model's answer is checked through a fact-checking procedure to indicate the likely correctness and credibility of the answer. Optionally, this result could be displayed to the user on the website, for example as a green, yellow, or red dot.

Rationale

Fact-checking is built in to reduce the risk of incorrect or untrue answers being presented to the user.

Level

subfunction

Actors

QA_Service

Preconditions

The model must have generated a response

Postconditions

For wrong or incorrect answers, an error message or warning must be output instead of, or alongside, the answer. Optionally, the fact-check result can be displayed, for example as a green, yellow, or red dot.

Basic Flow

  1. The model answer is generated
  2. The fact-check result is generated
  3. With a good result: The model answer is printed
  4. Optionally a green dot is displayed

Alternative Paths

  1. The model answer is generated
  2. The fact-check result is generated
  3. With an unsafe result: A warning is printed with or without the answer
  4. With a bad result: A warning is printed with or without the answer
  5. Optionally an orange (unsafe) or red (bad) dot is displayed
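
A minimal Python sketch of this flow, assuming a three-level fact-check verdict; the names FactCheckResult, DOT_COLOR, answer_with_fact_check, generate_answer, and fact_check are hypothetical placeholders and not part of the existing QA_Service interface:

from enum import Enum


class FactCheckResult(Enum):
    """Hypothetical three-level verdict returned by the fact-checking step."""
    GOOD = "good"      # answer appears correct   -> green dot
    UNSAFE = "unsafe"  # correctness is uncertain -> orange dot
    BAD = "bad"        # answer appears wrong     -> red dot


DOT_COLOR = {
    FactCheckResult.GOOD: "green",
    FactCheckResult.UNSAFE: "orange",
    FactCheckResult.BAD: "red",
}


def answer_with_fact_check(question, generate_answer, fact_check):
    """Generate a model answer, fact-check it, and decide what to show.

    `generate_answer` and `fact_check` are assumed callables standing in for
    the model call and the fact-checking procedure.
    """
    answer = generate_answer(question)      # 1. the model answer is generated
    verdict = fact_check(question, answer)  # 2. the fact-check result is generated

    if verdict is FactCheckResult.GOOD:
        # 3. good result: the model answer is shown, optionally with a green dot
        return {"text": answer, "dot": DOT_COLOR[verdict]}

    # Alternative paths: unsafe or bad result -> warning with (or without) the answer
    warning = "Warning: this answer could not be verified and may be incorrect."
    return {"text": f"{warning}\n\n{answer}", "dot": DOT_COLOR[verdict]}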

Visualisation

flowchart LR;
  1[The model answer is generated]-->2[The fact-check result is generated]
  2 -- With a good result -->3.1[The model answer is printed]
  2 -- With an unsafe result -->3.2[A warning is printed with or without the answer]
  2 -- With a bad result -->3.3[A warning is printed with or without the answer]
  3.1-->4.1[Optionally a green dot is displayed]
  3.2-->4.2[Optionally an orange dot is displayed]
  3.3-->4.3[Optionally a red dot is displayed]

Other related issues, use cases, features

#16 #17