Closed asalimih closed 4 years ago
Hi, I have a network which outputs a tuple containing two elements. One of them is a scalar, and that is the one I want to use with multiple methods in Captum. Is there any way to handle multiple outputs in Captum, or should I change my model's architecture for that?
Hi Adel,
Yes, you can handle multiple outputs by wrapping the model inside a function. Inside this function you can return the value you want to compute attributions for, e.g.:
def forward_func(*x):
    # return the first element of the output tuple
    return my_model(*x)[0]

ig = IntegratedGradients(forward_func)
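For reference, here is a minimal self-contained sketch of the same pattern with IntegratedGradients; the TwoOutputNet model, its layer sizes, and the random inputs are made up for illustration, only the wrapping pattern matters:

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class TwoOutputNet(nn.Module):
    # hypothetical model that returns a tuple: (score to attribute on, auxiliary output)
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 1)
        self.aux = nn.Linear(4, 3)

    def forward(self, x):
        return self.fc(x), self.aux(x)

my_model = TwoOutputNet()

def forward_func(*x):
    # return only the first element of the tuple so Captum sees a single tensor
    return my_model(*x)[0]

ig = IntegratedGradients(forward_func)
inputs = torch.randn(2, 4)
attributions = ig.attribute(inputs)  # same shape as inputs: (2, 4)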
@miguelmartin75 Thank you very much
Hello again, it seems wrapping the model inside a function doesn't work for some other methods. It works for vanilla Saliency but not for GuidedBackprop and GuidedGradCam. My code is:
def model_inter(*x):
    return model(*x)[0]

inter_emb_d = configure_interpretable_embedding_layer(model, 'embedding_d')
inter_emb_t = configure_interpretable_embedding_layer(model, 'embedding_t')
input_emb_d = inter_emb_d.indices_to_embeddings(test_tp_dr)
input_emb_t = inter_emb_t.indices_to_embeddings(test_tp_pr)

saliency_attr = Saliency(model_inter).attribute((input_emb_d, input_emb_t))
guidedBackProp_attr = GuidedBackprop(model_inter).attribute((input_emb_d, input_emb_t))
guidedGradCam_attr = GuidedGradCam(model_inter, model.conv_d_3).attribute((input_emb_d, input_emb_t))

remove_interpretable_embedding_layer(model, inter_emb_d)
remove_interpretable_embedding_layer(model, inter_emb_t)
GuidedBackprop and GuidedGradCam raise the following error:
/usr/local/lib/python3.6/dist-packages/captum/attr/_core/guided_backprop_deconvnet.py in __init__(self, model, use_relu_grad_output)
     27         self.use_relu_grad_output = use_relu_grad_output
     28         assert isinstance(self.model, torch.nn.Module), (
---> 29             "Given model must be an instance of torch.nn.Module to properly hook"
     30             " ReLU layers."
     31         )
AssertionError: Given model must be an instance of torch.nn.Module to properly hook ReLU layers
Also, I know the code I've written for GuidedGradCam is not correct, because I specified the model's convolution layer via model.conv_d_3 but passed the model itself as a function, which I think may cause a problem.
Hi @asalimih, for GuidedBackprop and GuidedGradCam, a small workaround is needed since these methods need access to the original model to appropriately hook ReLU layers. Something like this should work for GuidedBackprop:
def model_wrapper(*x):
    return model(*x)[0]

gb = GuidedBackprop(model)
gb.forward_func = model_wrapper
gb.attribute(inputs)
For GuidedGradCam, since it combines the two methods GradCAM and GuidedBackprop, the forward_func of both should be replaced appropriately:
ggc = GuidedGradCam(model, model.conv_d_3)  # the target conv layer is still passed to the constructor
ggc.grad_cam.forward_func = model_wrapper
ggc.guided_backprop.forward_func = model_wrapper
ggc.attribute(inputs)
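Putting it together, a minimal runnable sketch is below. The ConvTwoOutputNet model, its conv layer, and the input shapes are hypothetical; the point is that the constructors receive the real nn.Module (so the ReLU hooks can be registered) while the wrapped forward function is swapped in afterwards, so attributions are still computed on the first element of the output tuple:

import torch
import torch.nn as nn
from captum.attr import GuidedBackprop, GuidedGradCam

class ConvTwoOutputNet(nn.Module):
    # hypothetical conv model that returns a tuple: (score, intermediate features)
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 16 * 16, 1)

    def forward(self, x):
        h = self.relu(self.conv(x))
        return self.fc(h.flatten(1)), h

model = ConvTwoOutputNet()

def model_wrapper(*x):
    # attribute on the first element of the tuple
    return model(*x)[0]

inputs = torch.randn(2, 3, 16, 16, requires_grad=True)

gb = GuidedBackprop(model)       # hooks ReLU modules on the real nn.Module
gb.forward_func = model_wrapper  # but the forward/backward pass uses the wrapper
gb_attr = gb.attribute(inputs)

ggc = GuidedGradCam(model, model.conv)  # GradCAM still needs the target conv layer
ggc.grad_cam.forward_func = model_wrapper
ggc.guided_backprop.forward_func = model_wrapper
ggc_attr = ggc.attribute(inputs)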
@asalimih, can we close the issue? Do you have more questions?