albermax / innvestigate

A toolbox to iNNvestigate neural networks' predictions!

Add support for all output activations #310

Closed Rubinjo closed 1 year ago

Rubinjo commented 1 year ago

I'm working on a binary classification problem and therefore use a sigmoid rather than a softmax activation on my output layer. I have adjusted the model_wo_softmax function to accept any kind of activation, passed as an argument, so that binary classification problems are also covered.

The old vs. the new call:

```python
# old
innvestigate.model_wo_softmax(model)
# new
innvestigate.model_wo_output_activation(model, "softmax")
```

I have also updated all examples and documentation that cover this function, so everything now uses the new call.

adrhill commented 1 year ago

Hi @Rubinjo, thanks for the contribution. Adding a model_wo_output_activation function sounds fine to me; however, this PR needs three changes:

  1. We can't remove model_wo_softmax since this would break our users' existing code. We recently had a major breaking release and this minor change is not worth another one.
  2. I don't see a reason to update the readme and all notebooks. iNNvestigate is a package for use with classifiers and these most commonly use softmax output activation functions. If this update to the documentation was necessary, I would also suggest doing it in another PR to keep the diff small.
  3. If model_wo_output_activation is supposed to remove any activation function, an implementation that doesn't require the name of the activation function as an argument would be more elegant.
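Point 3 could, for example, be addressed by inspecting the model's final layer instead of taking the activation's name as a string. The following is a minimal sketch of that idea in Keras, not iNNvestigate's actual code; the helper name `has_removable_output_activation` is hypothetical:

```python
from tensorflow import keras

def has_removable_output_activation(model):
    # Hypothetical helper: instead of requiring the activation's name
    # as an argument, look at the last layer directly. Layers such as
    # Dense store their activation function in `layer.activation`;
    # `activation=None` resolves to keras.activations.linear.
    last = model.layers[-1]
    act = getattr(last, "activation", None)
    return act is not None and act is not keras.activations.linear
```

With this check, any non-linear output activation (softmax, sigmoid, ...) can be detected and stripped without the caller naming it.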
Rubinjo commented 1 year ago

Yeah, it is indeed a fairly invasive change. I mainly made it to avoid duplicating model_wo_softmax into several similar functions. I can instead implement it by refactoring only the pre_softmax_tensors method; then your first point is still satisfied and no updates to the readme or notebooks are needed.

adrhill commented 1 year ago

I would leave model_wo_softmax as is and add a model_wo_output_activation(model) function that doesn't require a string. All other changes besides added tests should be reverted.

Rubinjo commented 1 year ago

Yeah, I agree. Preserving model_wo_softmax and adding a model_wo_output_activation(model) should be the end result.

Rubinjo commented 1 year ago

I have adjusted pre_softmax_tensors so that model_wo_softmax still works without any changes on the user's side, and added a model_wo_output_activation(model) that reuses the same helper.

Now all three points from your earlier message should be satisfied.
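For readers following along, the agreed-upon behavior can be sketched as follows. This is an illustrative stand-alone implementation assuming a Keras model whose final layer stores its activation in its config (e.g. Dense); it is not the code merged in this PR:

```python
import numpy as np
from tensorflow import keras

def model_wo_output_activation(model):
    # Sketch: rebuild the model from its config with the final
    # layer's activation replaced by "linear", then copy the
    # trained weights over. No activation name is required.
    config = model.get_config()
    config["layers"][-1]["config"]["activation"] = "linear"
    clone = model.__class__.from_config(config)
    clone.set_weights(model.get_weights())
    return clone
```

Applying the sigmoid to the stripped model's output should reproduce the original model's probabilities, which makes the behavior easy to test.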

adrhill commented 1 year ago

Looks good to me. Sorry for the delay! :)