Hi. Can you provide the command you used to run the eval? The eval save directory should be provided here: https://github.com/locuslab/tofu/blob/main/config/forget.yaml#L23.
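For context, the eval-related fields in `config/forget.yaml` are roughly of this shape (a hypothetical sketch; the actual key names and values in the repo may differ):

```yaml
# Hypothetical sketch of the eval-related fields in config/forget.yaml;
# the actual keys in the repo may differ.
model_path: paper_models/llama2-7b_forget10   # checkpoint being evaluated (placeholder)
save_dir: ${model_path}/eval_results          # where eval outputs are written
```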
Hey, thank you for the prompt response. I have not even gotten to running the evals yet. I ran finetune.py and was about to run forget.py, as suggested in the README. However, it seems like forget.py loads information from the data folder, and I am not sure where exactly that information is generated. I am specifically talking about https://github.com/locuslab/tofu/blob/main/config/forget.yaml#L20. How do we generate this information? You provide the data folder directly in your repo.
The way you generate this information is to first train and evaluate a retain model. The current code does not include a way to produce these files without modification; I will update it ASAP. In the meantime, you can also try modifying this line https://github.com/locuslab/tofu/blob/main/forget.py#L164 to trainer.evaluate() and just pass the retain model. Basically, you can evaluate without training. Does that make sense?
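As a rough illustration, here is a minimal sketch of that change, assuming forget.py builds a standard Hugging Face Trainer (all paths and the toy dataset below are placeholders, not the repo's actual identifiers):

```python
# Sketch: evaluate a retain model without running the unlearning loop.
# Paths and the toy dataset are placeholders; forget.py builds its own
# Trainer and eval dataset, so really only the last call changes.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

ckpt = "path/to/retain_model_checkpoint"  # placeholder path
model = AutoModelForCausalLM.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # e.g. Llama has no pad token

# Tiny stand-in for the eval set that forget.py normally constructs.
def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    enc["labels"] = enc["input_ids"].copy()  # causal-LM loss needs labels
    return enc

eval_dataset = Dataset.from_dict(
    {"text": ["Question: Who wrote the book? Answer: The author."]}
).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="retain_eval",
                           per_device_eval_batch_size=1),
    eval_dataset=eval_dataset,
)

metrics = trainer.evaluate()  # in place of trainer.train() at forget.py#L164
print(metrics)
```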
Hey, no worries, and thank you for the great work. One last request: I know you mentioned that you will update the codebase with instructions for running evaluations, but I was wondering if you could give some instructions here? Basically, I want to understand, given an unlearned model, how to generate the metrics you report in the paper, namely model utility and forget quality.
@rthapa84 I apologize for the late response. Can you take a look at the new update on evaluation and let me know if you find that useful? Thanks!
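For reference, the paper defines forget quality as the p-value of a two-sample Kolmogorov-Smirnov test between the truth-ratio distributions of the unlearned model and the retain model on the forget set, while model utility aggregates probability, ROUGE, and truth-ratio scores on the retain, real-authors, and world-facts sets. Once you have the two truth-ratio arrays from the eval outputs, the test itself is a one-liner; a minimal sketch with placeholder file names:

```python
# Sketch of the forget-quality computation described in the paper:
# a two-sample KS test between the truth-ratio distributions of the
# unlearned model and the retain model on the forget set.
import numpy as np
from scipy.stats import ks_2samp

# Placeholder files: arrays of per-example truth ratios from the eval logs.
truth_ratios_unlearned = np.load("unlearned_truth_ratios.npy")
truth_ratios_retain = np.load("retain_truth_ratios.npy")

stat, p_value = ks_2samp(truth_ratios_unlearned, truth_ratios_retain)
print(f"Forget quality (KS p-value): {p_value:.4g}")
```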
Hey, there may be an obvious answer that I am not seeing, but I am wondering where in the scripts those evals inside the data folder are generated? I could not find it on a quick scan.
Thank you!