locuslab / tofu

Landing Page for TOFU

Where are the evals inside the data folder being generated? #3

Closed rthapa84 closed 7 months ago

rthapa84 commented 9 months ago

Hey, there may be an obvious answer that I am not seeing, but I was wondering where the evals inside the data folder are generated by the scripts? I could not find it on a quick scan.

Thank you!

zhilif commented 9 months ago

Hi. Can you provide the command that you used to run the eval? The eval save directory should be provided here: https://github.com/locuslab/tofu/blob/main/config/forget.yaml#L23 (an illustrative snippet is below).
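For reference, the entry in question looks roughly like the following; the key name and value here are illustrative rather than copied from the repo, so treat the linked line as authoritative:

```yaml
# config/forget.yaml -- illustrative snippet, not the repo's exact contents
save_dir: ./results/eval   # directory where the eval outputs are written
```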

rthapa84 commented 9 months ago

Hey, thank you for the prompt response. I have not even reached the point of running the evals yet. I ran finetune.py and was about to run forget.py, as suggested in the readme, but it seems that forget.py loads files from the data folder, and I am not sure where exactly those files are generated. I am specifically talking about this: https://github.com/locuslab/tofu/blob/main/config/forget.yaml#L20. How do we generate these files? You provide the data folder directly in your repo.

zhilif commented 9 months ago

> I have not even reached the point of running the evals yet. I ran finetune.py and was about to run forget.py, as suggested in the readme, but it seems that forget.py loads files from the data folder, and I am not sure where exactly those files are generated. I am specifically talking about this: https://github.com/locuslab/tofu/blob/main/config/forget.yaml#L20. How do we generate these files?

The way you generate these files is to first train and then evaluate a retain model. The current code does not include a way to get them without modification; I will update it ASAP. In the meantime, you can try modifying this line, https://github.com/locuslab/tofu/blob/main/forget.py#L164, to call trainer.evaluate() and pass in the retain model, so that you evaluate without training. Does that make sense? A rough sketch of that swap is below.
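To make the idea concrete, here is a minimal sketch assuming a standard Hugging Face Trainer like the one forget.py builds. The checkpoint path and the toy eval data are placeholders, not the repo's actual eval pipeline:

```python
# Evaluate a retain checkpoint without training: build a Trainer around it
# and call evaluate() instead of train().
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

retain_ckpt = "path/to/retain_model_checkpoint"  # hypothetical path
model = AutoModelForCausalLM.from_pretrained(retain_ckpt)
tokenizer = AutoTokenizer.from_pretrained(retain_ckpt)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Stand-in for the tokenized eval split that forget.py prepares.
# (For a real run you would mask padding positions in the labels with -100.)
texts = ["Question: Who wrote the book? Answer: ...",
         "Question: Where was the author born? Answer: ..."]
enc = tokenizer(texts, padding=True)
enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
eval_dataset = Dataset.from_dict(dict(enc))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="retain_eval",
                           per_device_eval_batch_size=2),
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)

# The swap suggested above: evaluation only, no unlearning step.
metrics = trainer.evaluate()
print(metrics)
```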

rthapa84 commented 9 months ago

Hey, no worries, and thank you for the great work. One last request: I know you mentioned that you will update the codebase with instructions for running evaluations, but is there some guidance you can give here in the meantime? I basically want to understand, given an unlearned model, how to generate the metrics you report in the paper, i.e. model utility and forget quality.

zhilif commented 8 months ago

@rthapa84 I apologize for the late response. Can you take a look at the new update on evaluation and let me know if you find it useful? Thanks!