hans66hsu / nn_interpretability

Pytorch implementation of various neural network interpretability methods
MIT License

Fixed bug of guided backpropagation #4

Closed: miquelmn closed this pull request 2 years ago

miquelmn commented 2 years ago

GuidedBackprop, defined in guided_backprop.py, failed for models built with layers that are not contained in a container wrapper (e.g. torch.nn.Sequential).

Fixed by wrapping the layer in a list when it is not iterable:

sequential_modules = [sequential_modules]
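The one-line fix above can be illustrated with a minimal, framework-agnostic sketch. The helper name `as_iterable` is hypothetical (not from the PR); it shows the idea of normalizing a bare layer and an iterable container (such as torch.nn.Sequential, which supports iteration over its sub-layers) into one shape that downstream hook-registration code can loop over:

```python
def as_iterable(module):
    """Return something iterable over layers.

    Containers like torch.nn.Sequential are already iterable and are
    returned unchanged; a bare layer (e.g. a single Conv2d) is wrapped
    in a one-element list, mirroring the PR's fix:
        sequential_modules = [sequential_modules]
    """
    try:
        iter(module)          # containers pass this check
        return module
    except TypeError:
        return [module]       # bare, non-iterable layer: wrap it
```

With this normalization, the same `for layer in as_iterable(m):` loop works whether the model exposes its layers through a container or as standalone attributes.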

hans66hsu commented 2 years ago

Thanks for the pull request. We initially assumed that all layers live in containers (e.g., torch.nn.Sequential or torch.nn.ModuleList) and are therefore iterable, in order to align with the models in the PyTorch model zoo, but this is a good idea.