-
Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
https://arxiv.org/abs/2405.15234
-
Hi, thanks for your work. I agree on the value of LLM unlearning for privacy preservation, since I also work on unlearning in the Federated Learning scenario. However, I just tested a simple case a…
-
Is the evaluation metric public?
Please share how the evaluation metric is computed.
-
Traceback (most recent call last):
  File "/home/gfq/code/privacy/Graph-Unlearning-main/lib_gnn_model/node_classifier.py", line 203, in
    graphsage.train_model()
  File "/home/gfq/code/privacy/G…
-
Hello,
I was investigating the topic of Machine Unlearning on diffusion models, came across your paper, and liked the effort. :D
Just wanted to remind you that you didn't add the citation at the …
-
Dear author,
Sorry to bother you this way, as I cannot find your exact email address. I am following up on your interesting work "**Has Approximate Machine Unlearning been evaluated prop…
-
Hello,
I have a problem and I don't know where to start. The beginning of training is fine: the reward mostly increases with every episode, and the car gets further and further away before le…
-
I am trying to reproduce the results in Figure 8 of the paper and am getting a truth ratio of 1.0 for the forget set (5%) at every unlearning step up to 30. Can you please help me with this? @zhilif an…
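For context on why a constant 1.0 can occur: a minimal sketch of a TOFU-style truth ratio, assuming it is computed as the mean length-normalized probability of perturbed (false) answers divided by the length-normalized probability of the paraphrased (true) answer. The function name and argument layout here are illustrative, not the repo's actual API; the point is that the ratio collapses to exactly 1.0 whenever the model assigns the same per-token probability to true and false answers.

```python
import math

def truth_ratio(log_p_paraphrase, log_p_perturbed, len_paraphrase, lens_perturbed):
    """Length-normalized ratio of perturbed (false) answer probability
    to paraphrased (true) answer probability (TOFU-style sketch).

    log_p_paraphrase: summed log-probability of the paraphrased true answer
    log_p_perturbed:  list of summed log-probabilities, one per false answer
    A ratio of 1.0 means the model is indifferent between true and false answers.
    """
    # Length-normalized probability of the paraphrased true answer
    p_true = math.exp(log_p_paraphrase / len_paraphrase)
    # Mean length-normalized probability over the perturbed false answers
    p_false = sum(
        math.exp(lp / n) for lp, n in zip(log_p_perturbed, lens_perturbed)
    ) / len(log_p_perturbed)
    return p_false / p_true
```

If the forget-set model scores all candidate answers identically (e.g. every summed log-prob works out to the same per-token value), the ratio is exactly 1.0 at every step, which is one thing worth checking before suspecting the unlearning procedure itself.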
-
e.g. Stanford review/blog, other evaluation/forgetting methods, ...
Be able to justify the choice made / alternatives.
-
Updated 2024-07-07.
Datasets:
- Used for evaluation:
- MMLU: https://huggingface.co/datasets/hails/mmlu_no_train
- ARC-Challenge: https://huggingface.co/datasets/allenai/ai2_arc
- HellaS…