utkuozbulak / pytorch-cnn-visualizations

Pytorch implementation of convolutional neural network visualization techniques

register_backward_hook #51

Closed: maryjis closed this issue 4 years ago

maryjis commented 4 years ago

Good day! I am fine-tuning a ResNeXt-101 model loaded from a .pth file. GradCam works well with my model, but when I try to use GuidedBackprop I get this error:

-> 80     gradients_as_arr = self.gradients.data.numpy()[0]
   81     return gradients_as_arr
   82

AttributeError: 'NoneType' object has no attribute 'data'

The error occurs because hook_function is never called.

def hook_layers(self):
    def hook_function(module, grad_in, grad_out):
        print("Vaxx")
        self.gradients = grad_in[0]

Could you help me understand why it is not called?

utkuozbulak commented 4 years ago

It probably never registers the hook in the first place. Check the next two lines, where the hook is registered on the first layer:

first_layer = list(self.model.features._modules.items())[0][1]
first_layer.register_backward_hook(hook_function)
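
A quick way to confirm that a hook both registers and fires is a toy check along these lines (a sketch with a dummy Conv2d, not your ResNeXt; register_backward_hook returns a handle you can keep):

import torch
import torch.nn as nn

layer = nn.Conv2d(3, 8, 3)
handle = layer.register_backward_hook(
    lambda module, grad_in, grad_out: print("hook fired"))

x = torch.randn(1, 3, 8, 8, requires_grad=True)
layer(x).sum().backward()  # should print "hook fired"
handle.remove()
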
maryjis commented 4 years ago

I think the hook is registered. Here is the full class:

class GuidedBackprop():
    def __init__(self, model):
        self.model = model
        self.gradients = None
        self.forward_relu_outputs = []
        # Put model in evaluation mode
        self.model.eval()
        self.update_relus()
        self.hook_layers()

    def hook_layers(self):
        def hook_function(module, grad_in, grad_out):
            print("Vaxx")
            self.gradients = grad_in[0]
        # Register hook to the first layer
        first_layer = list(self.model.features._modules.items())[0][1]
        first_layer.register_backward_hook(hook_function)

    def generate_gradients(self, input_image, target_class):
        # Forward pass
        model_output = self.model(input_image)
        # Zero gradients
        self.model.zero_grad()
        # Target for backprop
        one_hot_output = torch.FloatTensor(1, model_output.size()[-1]).zero_()
        one_hot_output[0][target_class] = 1
        # Backward pass
        print(one_hot_output)
        model_output.backward(gradient=one_hot_output)
        # Convert Pytorch variable to numpy array
        # [0] to get rid of the first channel (1,3,224,224)
        gradients_as_arr = self.gradients.data.numpy()[0]
        return gradients_as_arr

Here is the function where I use GuidedBackprop:

def guided_grad_cam(self):
    batch = next(iter(self.test_loader))
    img_paths, img_tensors, labels = batch
    img_path = img_paths[0]
    print(img_tensors.shape)
    prep_img = img_tensors[0].unsqueeze(0)

    # Grad cam
    gcv2 = GradCam(self.model, target_layer=7)
    # Generate cam mask
    cam = gcv2.generate_cam(prep_img, labels[0].data.numpy())
    print('Grad cam completed')

    # Guided backprop
    GBP = GuidedBackprop(self.model)
    # Get gradients
    print(labels[0].data.numpy())
    guided_grads = GBP.generate_gradients(prep_img, labels[0].data.numpy())
    print('Guided backpropagation completed')

    # Guided Grad cam
    cam_gb = guided_grad_cam(cam, guided_grads)

    file_name_to_export = os.path.join("cnn_visual", self.name)

    save_gradient_images(cam_gb, file_name_to_export + '_GGrad_Cam')
    grayscale_cam_gb = convert_to_grayscale(cam_gb)
    save_gradient_images(grayscale_cam_gb, file_name_to_export + '_GGrad_Cam_gray')
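
Side note: if I understand the repository's misc_functions.preprocess_image correctly, it returns the input with requires_grad enabled so that gradients can flow back to the image itself; a minimal equivalent for the tensor above would be:

prep_img = img_tensors[0].unsqueeze(0).clone().requires_grad_(True)
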
maryjis commented 4 years ago

I have solved this problem by unfreezing all layers in the model:

for name, param in self.resnet.named_parameters():
    param.requires_grad = True
utkuozbulak commented 4 years ago

If requires_grad is False, the backward hook never fires (because no gradient is computed for that part of the graph). Glad you were able to solve it.
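
To illustrate, here is a minimal sketch with a toy two-layer model (shapes and names are illustrative, not from the thread): when every tensor feeding a layer has requires_grad=False, autograd prunes that branch of the graph, so a backward hook registered on the layer is never attached to anything and never fires.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Flatten(), nn.Linear(8 * 6 * 6, 2))

# Freeze the first layer, as in a typical fine-tuning setup
for p in model[0].parameters():
    p.requires_grad = False

fired = []
model[0].register_backward_hook(lambda m, grad_in, grad_out: fired.append(True))

x = torch.randn(1, 3, 8, 8)  # the input does not require grad either
model(x).sum().backward()    # gradients flow only into the Linear layer
print(fired)                 # [] -> the hook never fired

# After unfreezing (the fix above), the hook fires on the next backward pass
for p in model[0].parameters():
    p.requires_grad = True
model(x).sum().backward()
print(fired)                 # [True]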