Closed piotrmigdalek closed 1 month ago
Hi,
The current implementation of ROME is based on the original paper. It's important to retain this baseline, so r-ROME will be reintroduced as a separate method in EasyEdit; it is currently in development.
Regarding your second question, it should be rephrase_prompts instead of rephrased_prompts. The parameter you passed was silently ignored because it is not a recognized keyword. Try modifying this and see if it resolves the issue.
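To illustrate why the misspelled keyword has no effect, here is a generic Python sketch (not EasyEdit's actual signature): functions that accept **kwargs silently swallow unrecognized keyword arguments.

```python
# Generic illustration, not EasyEdit's actual code: any keyword that is not
# an explicit parameter lands in **kwargs and is simply never used.
def edit(prompts, rephrase_prompts=None, **kwargs):
    # A misspelled `rephrased_prompts` ends up in kwargs and is dropped.
    return rephrase_prompts

assert edit("p", rephrase_prompts="r") == "r"    # recognized keyword
assert edit("p", rephrased_prompts="r") is None  # typo silently ignored
```

This is why there was no error message: the call succeeded, but the rephrase prompts never reached the evaluation code.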
Hi,
Thank you, it worked.
Two more questions.
Does sequential_edit=True make the model be evaluated only after all the edits are applied? And where can I find details on how the default portability and locality metrics are computed?
Thank you very much for your time and input :)
And maybe another one.
In IKE having:
train_ds = [
    {
        'prompt': 'Q: The president of the US is? A:',
        'target_new': 'Joe Biden',
        'rephrase_prompt': 'The leader of the United States is',
        'locality_prompt': 'The president of Russia is ',
        'locality_ground_truth': 'Putin'
    },
    {
        'prompt': 'Einstein specialized in',
        'target_new': 'physics',
        'rephrase_prompt': 'Einstein is good at',
        'locality_prompt': 'Q: Which subject did Newton specialize in? A: ',
        'locality_ground_truth': 'physics'
    },
    # add more if needed
]
What syntax should I use to add multiple locality and portability prompts?
Q1: Yes.
Q2: You can get the corresponding metrics by passing locality_inputs and portability_inputs in the format described here: https://github.com/zjunlp/EasyEdit?tab=readme-ov-file#baseeditor
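For example, a minimal single-dimension version of that format might look like the following sketch; the dimension names 'neighborhood' and 'one_hop' are illustrative labels, not required identifiers.

```python
# Minimal sketch of the dict-of-dicts input format from the EasyEdit README;
# each dimension maps to parallel lists of prompts and ground truths.
locality_inputs = {
    'neighborhood': {
        'prompt': ['The president of Russia is'],
        'ground_truth': ['Putin'],
    },
}
portability_inputs = {
    'one_hop': {
        'prompt': ['Q: Which subject did Newton specialize in? A:'],
        'ground_truth': ['physics'],
    },
}
```

Both dicts are then passed to the edit call, e.g. editor.edit(..., locality_inputs=locality_inputs, portability_inputs=portability_inputs).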
The data format for both is a dict: for each measurement dimension, you need to provide the corresponding prompts and their ground truths.
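Concretely, each extra dimension is just another top-level key whose value holds parallel prompt and ground-truth lists, one entry per edited case. A sketch, assuming two edit cases; the dimension names 'neighborhood' and 'distracting' are illustrative, not required:

```python
# Sketch of multiple locality dimensions for two edit cases; the key names
# are illustrative labels, and each inner list is parallel across cases.
locality_inputs = {
    'neighborhood': {
        'prompt': ['The president of Russia is',
                   'Q: Which subject did Newton specialize in? A:'],
        'ground_truth': ['Putin', 'physics'],
    },
    'distracting': {
        'prompt': ['Joe Biden. The capital of France is',
                   'Physics. The author of Hamlet is'],
        'ground_truth': ['Paris', 'Shakespeare'],
    },
}
# portability_inputs follows the same shape, with its own dimension keys.
```

Adding a third dimension is just adding a third top-level key with the same inner structure.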
Thank you for your massive help, got everything!
Hi,
Is the current implementation of ROME still the old implementation, or r-ROME? If the old one, can I somehow access the new implementation in order to compare the results?
The second question regards the generalization metric. My code looks like this:
metrics_ROME_mistral, edited_model_ROME_mistral, _ = editor_ROME_mistral.edit(
    prompts=prompts,
    target_new=target_new,
    ground_truth=ground_truth,
    subject=subject,
    rephrased_prompts=rephrased_prompts_edit,
    keep_original_weight=False,
    summary_metrics=True
)
The metrics have the format:
{'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': ..., 'requested_rewrite': {'prompt': "...", 'target_new': '...', 'ground_truth': '...', 'portability': {}, 'locality': {}, 'subject': '...'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}},
Metrics Summary: {'pre': {'rewrite_acc': 0.08}, 'post': {'rewrite_acc': 0.92}}
Why don't I get rephrase_acc while providing rephrase prompts? Is the post rewrite_acc the average of the rewrite_acc values after each edit?
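On the second point, the summary value is plausibly just the mean of the per-case accuracies; a 0.92 post rewrite_acc would be consistent with, say, 23 of 25 cases scoring 1.0. A toy sketch of that averaging (not EasyEdit's actual code):

```python
# Toy sketch (not EasyEdit's implementation): summary rewrite_acc as the
# mean of the per-case post-edit accuracies, here 23 successes out of 25.
per_case = ([{"post": {"rewrite_acc": [1.0]}}] * 23
            + [{"post": {"rewrite_acc": [0.0]}}] * 2)
summary = sum(c["post"]["rewrite_acc"][0] for c in per_case) / len(per_case)
assert summary == 0.92
```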
Thank you for your time and help :)