greentfrapp / lucent

Lucid library adapted for PyTorch
Apache License 2.0

Using Lucent with smaller images (CIFAR-100) #10

Open Randophilus opened 4 years ago

Randophilus commented 4 years ago

I am currently trying to use Lucent with a VGG model that I trained on CIFAR-100 (32x32x3 images). I modified the network by removing the global average pool and replacing the linear layers with a single 512→100 linear layer before training from scratch. I was previously using ONNX to transfer my trained models to TensorFlow and visualizing them with Lucid. Unfortunately, newer versions of PyTorch are no longer compatible with TensorFlow 1.x, and Lucid is not built for TensorFlow 2.x. I found your library recently, went through the example code, and got great results. But when I use Lucent on my trained models with fixed_image_size=32, the visualizations are blurrier, less colorful, and less semantic.

Here is an example of a network visualization in Lucid (all 512 filters of the last layer of vgg11_bn): [image]

and here is a visualization of the same network in Lucent: [image]

Both images use the parameterization param_f = lambda: param.image(32, fft=True, decorrelate=True, batch=1); in Lucent, I also pass fixed_image_size=32.
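For context, the full Lucent call looks roughly like this (a minimal sketch, assuming a trained model net already loaded in eval mode and a channel objective like 'features_25:3'):

from lucent.optvis import render, param

# Parameterize a 32x32 image in decorrelated FFT space.
param_f = lambda: param.image(32, fft=True, decorrelate=True, batch=1)
# fixed_image_size=32 keeps the input the model sees at 32x32.
_ = render.render_vis(net, "features_25:3", param_f=param_f, fixed_image_size=32)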

I already looked into transform.py in both Lucid and Lucent, and their standard_transforms appear to be identical. I also ran some tests in a Jupyter notebook where I toggled decorrelate, fft, and the transforms separately, and none of them seem to affect the visualization quality:

No transforms, no FFT, no decorrelate: [image]

Standard transforms, no FFT, no decorrelate: [image]

Standard transforms, FFT, no decorrelate: [image]

Standard transforms, no FFT, decorrelate: [image]

No transforms, FFT, decorrelate: [image]
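For reproducibility, the toggling in these tests looks roughly like this (a sketch, assuming a trained model net; standard_transforms and the transforms/fixed_image_size arguments come from Lucent's optvis module):

from lucent.optvis import render, param, transform

def visualize(net, obj, use_transforms, fft, decorrelate):
    # param.image controls the parameterization: FFT vs. pixel space,
    # decorrelated vs. raw RGB.
    param_f = lambda: param.image(32, fft=fft, decorrelate=decorrelate, batch=1)
    # An empty transforms list disables the standard pad/jitter/scale/rotate stack.
    tfms = transform.standard_transforms if use_transforms else []
    return render.render_vis(net, obj, param_f=param_f, transforms=tfms,
                             fixed_image_size=32, show_image=False)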

greentfrapp commented 4 years ago

Thanks for using Lucent! I'll look into this, but have you tried not using fixed_image_size=32?

greentfrapp commented 4 years ago

Also, @Randophilus, is there somewhere I can access the model you are using?

Randophilus commented 4 years ago

@greentfrapp Thank you for taking the time to make the Lucent library and for responding to my inquiry. When I don't use fixed_image_size=32, this is my result:

[image]

There is a very distinct pattern now, but the colors are still dull and the pattern is fairly different from what I got with Lucid.

As for the model, here is a state dict: https://drive.google.com/file/d/1pNQqqPuRdRGJLp2ZCYhy-CSg8oT3sdFy/view?usp=sharing

To use the state dict, you need to run:

import torch
import torch.nn as nn
import types
from torchvision import models

def forward_vgg_new(self, x):
    # Skip the global average pool: go straight from the conv
    # features to the classifier.
    x = self.features(x)
    x = x.view(x.size(0), -1)
    x = self.classifier(x)
    return x

net = models.vgg11_bn(pretrained=False)
# Replace the original linear layers with a single 512 -> 100 layer for CIFAR-100.
net.classifier = nn.Linear(512, 100)

# Override the forward method to ignore the average pool.
net.forward = types.MethodType(forward_vgg_new, net)
checkpoint = torch.load(PATH_TO_STATE_DICT)
net.load_state_dict(checkpoint['model_state_dict'])

net.cuda().eval()

The specific feature I was trying to render in the post above is 'features_25:3'.
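If it helps, the layer names Lucent expects can be listed with its model zoo utility (a quick sketch; get_model_layers is from lucent.modelzoo.util):

from lucent.modelzoo.util import get_model_layers

# Print every layer name Lucent recognizes for this model; "features_25"
# should show up here, and the ":3" suffix selects channel 3 of that layer.
for name in get_model_layers(net):
    print(name)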

Randophilus commented 4 years ago

I changed fixed_image_size to 64 and the visualization looks a bit better: [image]

Changing fixed_image_size to 128 makes the visualizations look more like the ones where I remove fixed_image_size entirely: [image]

Also, when fixed_image_size is 64, toggling FFT and decorrelate still does not seem to affect the final visualization quality.
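For anyone reproducing this comparison, the sweep looks roughly like this (a sketch, assuming net is loaded as above; the parameterized image stays 32x32, and fixed_image_size only changes the size the model sees, as I understand render_vis):

from lucent.optvis import render, param

for size in (32, 64, 128):
    param_f = lambda: param.image(32, fft=True, decorrelate=True, batch=1)
    # Render the same channel objective at each fixed_image_size for comparison.
    imgs = render.render_vis(net, "features_25:3", param_f=param_f,
                             fixed_image_size=size, show_image=False)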