unlearning-challenge / starting-kit

Starting kit for the NeurIPS 2023 unlearning challenge
https://unlearning-challenge.github.io/
Apache License 2.0

Consideration of network architecture and learning algorithms for unlearning effectiveness #11

Open tantrev opened 1 year ago

tantrev commented 1 year ago

The unlearning challenge could benefit from accounting for the impact of network architecture and training methods on unlearning performance.

Some neural network architectures and training methods, such as recursive cortical networks (https://towardsdatascience.com/understanding-rcns-structure-ec4b51b9c257) and gated linear networks/supermasks (https://www.numenta.com/blog/2021/02/04/why-neural-networks-forget-and-lessons-from-the-brain/), could have a large impact on how the competition is run and how models are evaluated.

It would be nice if future iterations of the challenge could consider:

  1. Incorporating architectures/learning algorithms like recursive cortical networks and gated linear networks/supermasks in the starter kit
  2. Evaluating submissions based on both unlearning performance and the network architecture/learning algorithm used

This could help identify approaches that balance performance and adaptability - a crucial property for building AI systems that can responsibly adjust to new requirements over time. Studying how architecture and learning algorithms affect unlearnability could drive progress in the field.

Please let me know if you would like me to expand on any part of this feedback or provide more suggestions. I'm happy to discuss ways to improve future iterations of this valuable challenge.

fabianp commented 1 year ago

Hi Trevor!

Thanks for your feedback. I agree it would be nice to extend the diversity of NN architectures, and we'll consider this for future editions. For this first edition we aimed to keep it simple by focusing on standard vision problems and architectures, but we would like to have a more diverse set of benchmarks if the competition gets renewed.

Kind regards


Silur commented 1 year ago

Information about the training parameters of the re-trained model, such as the number of epochs and the optimizer settings, is still needed: to evaluate effectiveness, we have to compare the unlearning time against the re-training time.
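
A minimal sketch of that comparison, assuming hypothetical `retrain_from_scratch` and `unlearn` routines (placeholders, not part of the starting kit):

```python
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed wall-clock seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Hypothetical stand-ins for the real training/unlearning routines.
def retrain_from_scratch(retain_set):
    return "model_retrained"

def unlearn(model, forget_set):
    return "model_unlearned"

retain_set, forget_set, model = [], [], "model_original"

_, retrain_time = timed(retrain_from_scratch, retain_set)
_, unlearn_time = timed(unlearn, model, forget_set)

# An unlearning method is only worthwhile if it beats re-training on cost,
# so the speedup ratio is the headline number to report.
speedup = retrain_time / unlearn_time if unlearn_time > 0 else float("inf")
```

The ratio is only meaningful if both runs use comparable hardware and the re-training uses the same epochs/optimizer settings as the original model, which is why those parameters need to be published.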