Closed: lintangsutawika closed this 2 years ago
Example to run eval on XNLI
CHECKPOINT_PATH="bigscience/bloom-350m"
OUTPUT_DIR="bloom-xnli"
dataset_name="xnli"
template_config_name="en"
dataset_config_name="fr"
python t-zero/evaluation/run_eval.py \
--dataset_name $dataset_name \
--dataset_config_name $dataset_config_name \
--template_config_name $template_config_name \
--model_name_or_path $CHECKPOINT_PATH \
--output_dir $OUTPUT_DIR \
--template_name 'GPT-3 style'
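The command above applies an English prompt template to French data, which is what makes the run cross-lingual. As a rough illustration (not code from the repo, and the template string is only an approximation of promptsource's actual "GPT-3 style" template), the resulting prompts look like English scaffolding wrapped around French example text:

```python
# Sketch only: approximates what an English NLI template does to a French example.
# The real "GPT-3 style" template is defined in promptsource, not here.

def apply_gpt3_style_template(premise: str, hypothesis: str) -> str:
    """Render an English-language NLI prompt around (possibly non-English) text."""
    return (
        f"{premise}\n"
        f"Question: {hypothesis} True, False, or Neither?"
    )

# A French XNLI-style example wrapped in the English template (code-switching):
example = {
    "premise": "Il est parti tôt ce matin.",
    "hypothesis": "Il est encore à la maison.",
}
prompt = apply_gpt3_style_template(example["premise"], example["hypothesis"])
print(prompt)
```

The French premise and hypothesis are left untouched; only the surrounding question scaffolding is English.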
@thomasw21
Looks good to me, besides the fact that the prompts are English prompts (I expected them to be in their respective languages).
I think this is fine. A bunch of the prompts coming from the eval hackathon are code-switching.
Thanks @lintangsutawika
Added a template_config_name arg so that the dataset and the prompt template source can be different, for example: prompts from XNLI en, but data to evaluate from XNLI fr.
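One plausible way to wire this up (a guess, not the actual run_eval.py code: the argument names mirror the example above, but the defaulting behavior is an assumption) is to have template_config_name fall back to dataset_config_name when it is not given, so existing single-language invocations keep working unchanged:

```python
import argparse

# Hypothetical sketch of the argument wiring; not the actual run_eval.py code.
def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Cross-lingual eval sketch")
    parser.add_argument("--dataset_name", required=True)
    parser.add_argument("--dataset_config_name", default=None)
    parser.add_argument(
        "--template_config_name",
        default=None,
        help="Prompt template config; defaults to the dataset config.",
    )
    args = parser.parse_args(argv)
    # Fallback (assumed): with no explicit template config,
    # prompts come from the same config as the data.
    if args.template_config_name is None:
        args.template_config_name = args.dataset_config_name
    return args

args = parse_args([
    "--dataset_name", "xnli",
    "--dataset_config_name", "fr",
    "--template_config_name", "en",
])
print(args.dataset_config_name, args.template_config_name)  # fr en
```

Under this defaulting scheme, omitting --template_config_name reproduces the old same-language behavior, while passing it explicitly gives the en-prompts-on-fr-data setup from the example.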