Dear developers,

I am applying the LayerLRP method to a model to compute attribution scores for the embedding layer's output by calling llrp = LayerLRP(model, layer=model.embedding). This call reports an error (see below) because no rule is defined for the embedding layer when _check_and_attach_rules() is called. My understanding is that LRP back-propagates relevance only down to the output of the embedding layer, so no rule should be needed for the embedding layer itself, since we are not distributing the attribution scores further to the inputs of the embedding layer. Is this a bug, or am I misunderstanding something?
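For concreteness, here is a minimal sketch that reproduces the error on a toy model; ToyClassifier and its layer names below are placeholders standing in for my actual model:

import torch
import torch.nn as nn
from captum.attr import LayerLRP

class ToyClassifier(nn.Module):
    # Toy stand-in for my actual model: an embedding followed by a classifier.
    def __init__(self, vocab_size=100, embed_dim=16, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.linear = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # Mean-pool the token embeddings, then classify.
        return self.linear(self.embedding(token_ids).mean(dim=1))

model = ToyClassifier()
token_ids = torch.randint(0, 100, (1, 5))

llrp = LayerLRP(model, layer=model.embedding)
# The next call raises the TypeError shown below, because
# _check_and_attach_rules() finds no default rule for nn.Embedding.
attributions = llrp.attribute(token_ids, target=0)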
TypeError                                 Traceback (most recent call last)
File:16

File /commons/envs/vnlp-interpretability/lib/python3.8/site-packages/captum/attr/_core/layer/layer_lrp.py:215, in LayerLRP.attribute(self, inputs, target, additional_forward_args, return_convergence_delta, attribute_to_layer_input, verbose)
    213 self.layers = []
    214 self._get_layers(self.model)
--> 215 self._check_and_attach_rules()
    216 self.attribute_to_layer_input = attribute_to_layer_input
    217 self.backward_handles = []

File /commons/envs/vnlp-interpretability/lib/python3.8/site-packages/captum/attr/_core/lrp.py:294, in LRP._check_and_attach_rules(self)
    292     layer.rule = None  # type: ignore
    293 else:
--> 294     raise TypeError(
    295         (
    296             f"Module of type {type(layer)} has no rule defined and no"
    297             "default rule exists for this module type. Please, set a rule"
    298             "explicitly for this module and assure that it is appropriate"
    299             "for this type of layer."
    300         )
    301     )

TypeError: Module of type <class 'torch.nn.modules.sparse.Embedding'> has no rule defined and nodefault rule exists for this module type. Please, set a ruleexplicitly for this module and assure that it is appropriatefor this type of layer.
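For what it is worth, the error message asks for a rule to be set explicitly on the module. If I understand the API correctly, that would look roughly like the sketch below (I have not verified that an identity rule is actually appropriate for an embedding layer); my point, though, is that no rule should be needed here, since I only want the relevance at the embedding output:

from captum.attr._utils.lrp_rules import IdentityRule

# Attaching a rule to the embedding module satisfies
# _check_and_attach_rules(); whether IdentityRule is semantically
# appropriate for nn.Embedding is an open question on my side.
model.embedding.rule = IdentityRule()

llrp = LayerLRP(model, layer=model.embedding)
attributions = llrp.attribute(token_ids, target=0)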
It looks like LRP/LayerLRP and DeepLift/LayerDeepLift take a PyTorch model as input, while the other methods take a forward function. Is there a plan for LRP and DeepLift to also support a forward function as input in the near future, or is that not feasible due to some technical difficulty?
I would really appreciate it if you could answer my questions and help me out. Thank you!
LayerLRP will still add hooks to each module in the model, including modules that appear before the target layer. The reason is that hooks need to be added before the model's forward/backward pass, and we cannot detect in advance which modules will execute before or after the target module. In this case, if it is feasible, you could split your model into two parts, one containing the portion up to and including the embedding and the other containing the portion after the embedding, and then apply LRP to the second part, as sketched below.
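For example, a rough sketch of that split, reusing the toy model above (module and attribute names are placeholders for your actual model):

import torch.nn as nn
from captum.attr import LRP

class PostEmbedding(nn.Module):
    # Hypothetical second half of the model: everything after the embedding.
    def __init__(self, original_model):
        super().__init__()
        # `linear` is a placeholder for whatever layers follow your embedding.
        self.linear = original_model.linear

    def forward(self, embedded):
        return self.linear(embedded.mean(dim=1))

post = PostEmbedding(model)

# Run the embedding manually, then apply LRP to the second part so that
# relevance is computed with respect to the embedding output.
embedded = model.embedding(token_ids).detach().requires_grad_(True)
lrp = LRP(post)
attributions = lrp.attribute(embedded, target=0)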
LRP and DeepLift do require access to the modules, because we need to register forward and backward hooks to access and override each module's gradients during the backward pass, and registering those hooks requires the module objects themselves. For the other methods we only need access to inputs and outputs, so they can accept any forward function rather than the module; a rough comparison is sketched below.
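To make the distinction concrete, a rough sketch only, reusing the toy example above:

from captum.attr import LRP, IntegratedGradients

# LRP needs the nn.Module itself so that forward and backward hooks can
# be registered on each submodule.
lrp = LRP(post)

# Gradient-based methods such as IntegratedGradients only need outputs as
# a function of inputs, so any callable works as the forward function.
def forward_fn(embedded):
    return post(embedded)

ig = IntegratedGradients(forward_fn)
ig_attributions = ig.attribute(embedded, target=0)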
Hope this helps! Please let us know if you have further questions.