Closed: pilot-j closed this 2 months ago
Great work. The following are the minor changes that need to be made:

- Update the `requirements.txt` file.
- Remove unused import statements.
- Remove unnecessary arguments.
- If possible, annotate the datatypes of the arguments as in other files, e.g. the `translation/verification.py` file.
Necessary changes done. Please review and let me know.
# Evaluation Script: Prompt Quality Assessment
Use `eval_script.py` to evaluate the quality of responses based on custom prompts.

Arguments:

- `--base_path`: Directory for output CSV reports.
- `--eval_csv`: Path to the evaluation metrics CSV.
- `--prompt_file`: Path to a `.txt` file containing a list of prompts.

## Prompt Class Description

The `Prompt` class is designed to encapsulate and structure the information needed for generating and evaluating language model prompts. It consists of three key attributes:

- `translate_to` (string): Specifies the target language or action for the prompt, for example `"Hindi"` or `"Spanish"`.
- `preamble` (string): Provides the introductory text or instructions that guide the interpretation of the `message`, for example `"Translate"` or `"Translate to Spanish"`.
- `message` (string): Contains the main content or input text that needs to be processed or translated as per the `translate_to` instruction.

Example Usage:
Given a prompt like:
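A minimal sketch of such a prompt, based on the attribute descriptions above (the exact `Prompt` class definition is an assumption; only the three-string structure and the `"Hindi"`/`"Translate"`/`"Input"` values are given in this document):

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    translate_to: str  # target language or action, e.g. "Hindi"
    preamble: str      # instruction text, e.g. "Translate"
    message: str       # input text to be processed or translated

# Each prompt is a list of three strings, wrapped into a Prompt:
raw = ["Hindi", "Translate", "Input"]
prompt = Prompt(*raw)
```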
This creates a prompt where the input text `"Input"` is instructed to be translated to Hindi. The `Prompt` class structures this data to be used effectively within the evaluation script.

Prompt Format: Each prompt is a list of three strings: `translate_to`, `preamble`, and `message`, wrapped into a `Prompt` class.

Example prompt file content:

Output: Generates evaluation reports named as:
Example: `prompt_0_Hindi_eval_report.csv`
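Judging from the example name above, the report filename appears to combine the prompt's index with its `translate_to` value. A sketch of that assumed pattern (not confirmed by the script itself):

```python
# Naming pattern inferred from the example report name
# "prompt_0_Hindi_eval_report.csv"; this is an assumption.
def report_name(index: int, translate_to: str) -> str:
    return f"prompt_{index}_{translate_to}_eval_report.csv"

print(report_name(0, "Hindi"))  # prompt_0_Hindi_eval_report.csv
```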
Usage:
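A hypothetical invocation, assuming the three arguments listed earlier (the paths are placeholders, and the one-prompt-per-line serialization of the `.txt` file is an assumption, not confirmed by this document):

```shell
# Sample prompt file: each line is one prompt as a list of three
# strings (translate_to, preamble, message) -- format assumed here.
cat > prompts.txt <<'EOF'
["Hindi", "Translate", "Input"]
["Spanish", "Translate to Spanish", "Good morning"]
EOF

# Hypothetical invocation; argument values are placeholders:
# python eval_script.py \
#   --base_path ./reports \
#   --eval_csv ./metrics/eval_metrics.csv \
#   --prompt_file ./prompts.txt
```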