Closed: alena-m closed this issue 4 months ago
Hi, I see that you modified the annotator to be an LLM estimator. However, the annotator's prompt asks the model to classify 'Yes' or 'No', while the ranker labels are '1','2',..,'5' (see the label_schema in config_ranking). As a result, the annotator produces non-existent labels, which causes this error.
An example of a valid instruction for your task: "Analyze the following movie review, and provide a score between 1 to 5"
One more thing: I see that you are using gpt-3.5 for the meta-prompts (and the annotator). This will not work well, especially for generation tasks; it's important to use GPT-4/4.5 to get optimal performance.
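For reference, a sketch of a consistent ranker annotator config, where the instruction's output space matches the label_schema (key names follow the config_ranking/config_default files mentioned in this thread; the exact schema may differ in your version):

```yaml
dataset:
  label_schema: ['1', '2', '3', '4', '5']  # the only labels the annotator may emit
annotator:
  method: 'llm'
  config:
    instruction:
      "Analyze the following movie review, and provide a score between 1 to 5"
```

The key point is that every label the annotator can produce must appear in label_schema, otherwise the pipeline hits the error above.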
Thanks! It works! This example is worth adding to the documentation.
Hi @Eladlev , I was working on a generation task. I followed the instructions: in config_generation I set

```yaml
annotator:
  method: ''
```

and in config_default:

```yaml
annotator:
  method: 'llm'
  config:
    llm:
      type: 'OpenAI'
      name: 'gpt-4'
    instruction:
      "Assess this generated message,
      1. does it align with the intent of user input,
      2. does it rephrase user input,
      If all the answers are Yes, then response '1', otherwise response '0'"
    num_workers: 5
    prompt: 'prompts/predictor_completion/prediction.prompt'
    mini_batch_size: 1
    mode: 'annotation'
```
Is it expected that in dump/generator/dataset.csv the 'prediction' and 'score' columns are all blank? And can you explain the role of the 'annotator' in generation tasks?
Thank you
Hi @danielliu99,
```
generation_config_params.eval.function_params.instruction = ranker_config_params.annotator.config.instruction
```
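The override above can be illustrated with a small runnable sketch. The nested config objects here are stand-ins built with `SimpleNamespace` (AutoPrompt's actual config classes differ); it only demonstrates that the generation pipeline reuses the ranker's annotator instruction as the generation eval instruction, so the instruction needs to be defined once, on the ranker side:

```python
# Illustrative only: dict-like stand-ins for the two config objects.
from types import SimpleNamespace

# Ranker config defines the annotator instruction (as in config_ranking).
ranker_config_params = SimpleNamespace(
    annotator=SimpleNamespace(
        config=SimpleNamespace(
            instruction="Analyze the following movie review, and provide a score between 1 to 5"
        )
    )
)

# Generation config starts with an empty eval instruction.
generation_config_params = SimpleNamespace(
    eval=SimpleNamespace(function_params=SimpleNamespace(instruction=""))
)

# The override from the reply: the generation eval reuses the ranker's
# annotator instruction.
generation_config_params.eval.function_params.instruction = (
    ranker_config_params.annotator.config.instruction
)

print(generation_config_params.eval.function_params.instruction)
```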
Hi, with the latest changes I got a new error when running run_generation_pipeline.py. config_ranking.yml and config_generation.yml are not modified. config_default.yml is:

I run the command: