After adding regularization to the trainer, go over all datasets and experiment with every type of regularization, and check the robustness effect when training with it (Train -> Reinitialize layers -> See the effect on accuracy).
What happens to the critical/robust effect in this setting? Are the robust layers still the same? Does the criticality spread/diffuse over all layers?
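For anyone picking this up, the Train -> Reinitialize -> measure-accuracy loop could look roughly like the sketch below: train a network, then for each layer swap the trained weights for fresh random ones (keeping every other layer trained) and record the accuracy drop. Layers whose re-initialization barely hurts accuracy are the "robust" ones; layers that tank accuracy are "critical". This is a toy numpy MLP with made-up sizes and a synthetic dataset, not our actual trainer or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two well-separated Gaussian blobs, labels 0 and 1.
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def init_params(rng):
    # 2 -> 16 -> 16 -> 2 MLP (sizes are arbitrary for the sketch).
    sizes = [2, 16, 16, 2]
    return [(rng.normal(0, 0.3, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, X):
    # Returns the input plus every layer's (post-ReLU) activation.
    h, acts = X, [X]
    for i, (W, b) in enumerate(params):
        z = h @ W + b
        h = np.maximum(z, 0) if i < len(params) - 1 else z
        acts.append(h)
    return acts

def accuracy(params, X, y):
    return float((forward(params, X)[-1].argmax(1) == y).mean())

def train(params, X, y, lr=0.1, steps=300):
    # Full-batch softmax cross-entropy with manual backprop.
    n, onehot = len(X), np.eye(2)[y]
    for _ in range(steps):
        acts = forward(params, X)
        logits = acts[-1]
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        grad = (p - onehot) / n  # dL/dlogits
        for i in reversed(range(len(params))):
            W, b = params[i]
            gW, gb = acts[i].T @ grad, grad.sum(0)
            if i > 0:
                # Propagate through the ReLU of the previous layer.
                grad = (grad @ W.T) * (acts[i] > 0)
            params[i] = (W - lr * gW, b - lr * gb)
    return params

params = train(init_params(rng), X, y)
base_acc = accuracy(params, X, y)

# Re-init probe: replace ONE trained layer with fresh random weights,
# keep all other layers trained, and record the accuracy drop.
drops = {}
for i in range(len(params)):
    probe = list(params)
    probe[i] = init_params(np.random.default_rng(1))[i]
    drops[i] = base_acc - accuracy(probe, X, y)

print(base_acc, drops)
```

The ticket then asks to rerun this loop with each regularizer turned on (weight decay, dropout, etc.) and compare the per-layer `drops` profile: do the same layers stay robust, or does criticality spread across all of them?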
@noamloya could you walk me through how this experiment works? I want to take this ticket, but I don't know exactly what I'm supposed to measure or what phenomenon I'm looking for.