HugoTex98 opened this issue 2 years ago
When you wrap a Keras model into an explainer, it is up to you whether you analyse the latent space after the encoder or explain the reconstructions themselves: do you want to explain single dimensions in the latent space, or single output dimensions of the reconstruction?
Most other applications have a classic supervised setup, where the predicted class (logits/score) is explained.
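To make the two options concrete, here is a minimal numpy sketch of LRP-ε through a toy dense autoencoder (random stand-in weights, linear layers, not the iNNvestigate API). Relevance can start either at a single latent dimension or at a single reconstruction dimension; in the second case it is propagated through the decoder first and then through the encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny autoencoder: 4 inputs -> 2 latent -> 4 reconstructed,
# with linear layers and random stand-in weights (no trained model assumed).
W_enc = rng.normal(size=(4, 2))
W_dec = rng.normal(size=(2, 4))

def lrp_eps_layer(a, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's outputs to its inputs
    with the LRP-epsilon rule: R_i = sum_j a_i W_ij / (z_j + eps*sign(z_j)) R_j."""
    z = a @ W                              # layer pre-activations
    s = R_out / (z + eps * np.sign(z))     # stabilized relevance per output unit
    return a * (W @ s)                     # back-distribute to the inputs

x = rng.normal(size=4)
h = x @ W_enc                              # latent code
x_hat = h @ W_dec                          # reconstruction

# Option 1: explain a single latent dimension (relevance starts at h[0]).
R_h = np.zeros(2); R_h[0] = h[0]
R_x_latent = lrp_eps_layer(x, W_enc, R_h)

# Option 2: explain a single reconstruction dimension (starts at x_hat[0]):
# propagate through the decoder first, then through the encoder.
R_rec = np.zeros(4); R_rec[0] = x_hat[0]
R_h2 = lrp_eps_layer(h, W_dec, R_rec)
R_x_rec = lrp_eps_layer(x, W_enc, R_h2)

# LRP-epsilon approximately conserves total relevance layer to layer.
print(np.allclose(R_x_latent.sum(), h[0], atol=1e-4))
print(np.allclose(R_x_rec.sum(), x_hat[0], atol=1e-4))
```

The conservation check at the end is the main point: whichever output unit you pick as the starting point, its value is redistributed over the input pixels, so the two options answer genuinely different questions about the same model.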
Thanks for your reply @enryH !
If I want to explain the reconstructions of my inputs, do I need to change anything in the normal configuration of my LRP analyzer?
Normally it is advised (in papers) to look at the logits, so you need to remove the activation. What loss are you using?
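For intuition on why the activation should be removed, here is a standalone numpy sketch (toy numbers, not the iNNvestigate API): with a saturated sigmoid output, gradient×input on the activated output nearly vanishes, while the same attribution on the pre-activation (logit) stays informative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical output unit deep in sigmoid saturation.
x = np.array([3.0, 2.0])
w = np.array([2.0, 1.5])
z = w @ x                  # logit = 9.0
y = sigmoid(z)             # ~0.9999, saturated

# Gradient x input on the activated output: scaled by sigmoid'(z),
# which is tiny in saturation, so the attribution almost vanishes.
rel_after = sigmoid(z) * (1.0 - sigmoid(z)) * w * x

# Gradient x input on the logit: unaffected by saturation.
rel_before = w * x         # rel_before = [6.0, 3.0]

print(rel_before)
print(rel_after)           # values on the order of 1e-4
```

This is the usual argument for explaining the pre-activation output; the same reasoning applies to an autoencoder whose decoder ends in a squashing activation.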
Yes, I already removed the activation function! For the loss I'm using MSE @enryH
@HugoTex98 If you like you can also check out our paper https://arxiv.org/abs/1910.13140 that goes beyond explaining single dimensions in the latent space.
Hello everyone!
I decided to use this amazing toolbox with my autoencoder model, but I'm having doubts about how it works for this type of model...
How will relevance scores be calculated in this case? Are they computed in the encoder, or once the input is reconstructed?
Can anyone help me with this question?
My model is the following: