Open chiayi-hsu opened 3 months ago
Hi! Thanks for bringing this up.
Have you created your own dataset? If so, please make sure it can be loaded from the path in the error. You can also upload your dataset to HF, uncomment this line, and comment out the current loading.
Thank you for getting back to me.
I want to evaluate the poisoned model's ability to handle harmful questions using a specific trigger ("SUDO"). Should I use the dataset from Anthropic/hh-rlhf (red-team-attempts)?
The code assumes you have already generated the corresponding dataset.
If you want to evaluate quickly, you can replace the loading line with:

```python
self.data = load_dataset(f"ethz-spylab/harmless-eval-{trojan}", split=self.SPLIT)
```
This will only work for SUDO, since that is the only trojan for which we host an evaluation dataset on HF. Let me know if it works.
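For reference, a minimal sketch of how the hosted eval dataset could be loaded for a given trojan. The helper function `eval_repo_id` is hypothetical (not part of the repo); the `load_dataset` call mirrors the snippet above and is commented out because the repo is gated and requires approved access plus an auth token:

```python
# Hypothetical helper: builds the Hugging Face repo id for a trojan's
# hosted evaluation dataset (only "SUDO" is actually hosted).
def eval_repo_id(trojan: str) -> str:
    return f"ethz-spylab/harmless-eval-{trojan}"

# Usage, assuming you have been granted access to the gated repo and are
# authenticated (e.g. via `huggingface-cli login` or an explicit token):
#
#   from datasets import load_dataset
#   data = load_dataset(eval_repo_id("SUDO"), split="test", token=True)

print(eval_repo_id("SUDO"))  # ethz-spylab/harmless-eval-SUDO
```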
I haven't gotten approval for the ethz-spylab/harmless-eval-SUDO dataset yet, so it still shows an error. I will try again once I can access the dataset.
Thank you!
It works!!
After getting approval, I modified it to:

```python
self.data = load_dataset(f"ethz-spylab/harmless-eval-{trojan}", split=self.SPLIT, token='USER TOKEN')
```
Thanks!!
Hello,
I would like to ask how to create an evaluation dataset. When I directly run

```
python evaluate_generation_model.py --model_path ../../LLM_Models/poison-7b-SUDO- --token SUDO --report_path ./results/rlhf/poison-7b-SUDO-10
```

I get the following error:

`FileNotFoundError: Couldn't find a dataset script at /home/ai/Chia/rlhf-poisoning/safe_rlhf/datasets/raw/datasets/data/harmless-eval-SUDO/harmless-eval-SUDO.py or any data file in the same directory.`

Thank you!