greentfrapp / lucent

Lucid library adapted for PyTorch
Apache License 2.0
597 stars 89 forks

Visuals losing contrast over time #18

Closed gergopool closed 3 years ago

gergopool commented 3 years ago

Hi,

While visualizing a number of neurons, I noticed that after a while some images lost contrast. I did some digging and confirmed this on 3/3 neurons I tried.

Steps to reproduce

from lucent.optvis import render
import matplotlib.pyplot as plt

# `imagenet` is a model loaded earlier
img = render.render_vis(imagenet, "mixed4a:476", show_image=False)[0][0]
print(img.mean(), img.std())
plt.imshow(img)

This works fine. If you run it a couple of times, the std stays stable (around 0.18). Now supply a custom preprocessing step and transform list that are identical to the defaults:

from lucent.optvis import transform

transforms = transform.standard_transforms
transforms.append(lambda x: x * 255 - 117)
img = render.render_vis(imagenet, "mixed4a:476", preprocess=False, transforms=transforms, show_image=False)[0][0]
print(img.mean(), img.std())
plt.imshow(img)

This should also work fine. But after running it, run either of the two blocks again and you will see a significant, permanent drop in contrast (std around 0.16), which you cannot undo.
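For what it's worth, the accumulation can be reproduced without lucent at all. A minimal sketch of what seems to be happening, with placeholder strings standing in for the real transform callables:

```python
# Stand-in for the module-level list transform.standard_transforms.
standard_transforms = ["jitter", "scale"]  # placeholder names, not lucent's real entries

def run_block():
    transforms = standard_transforms   # this is an alias, not a copy
    transforms.append("normalize")     # so this mutates the shared list
    return transforms

run_block()
run_block()
print(standard_transforms)
# the shared "default" list has grown with every call:
# ['jitter', 'scale', 'normalize', 'normalize']
```

Under this reading, every run of the second block above appends another `lambda x: x * 255 - 117` to the shared defaults, so later runs (including ones that use the defaults) apply the extra transform repeatedly.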

Do you know what the problem might be? If not, could you please take a look? I tried to find the cause of this behaviour in the code, but every variable seems to be local, so I can't see where the state is kept.

gergopool commented 3 years ago

Okay, I found it. I needed to change

transforms = transform.standard_transforms

to

transforms = transform.standard_transforms.copy()

because the first line was only a reference to the shared list of transforms, so my append mutated transform.standard_transforms itself. In my mind I had treated standard_transforms as a per-call default rather than a module-level list; that's where I went wrong.
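The same aliasing can be seen in plain Python: `.copy()` produces an independent list, so appending no longer touches the shared default (placeholder entries again, not lucent's actual transforms):

```python
standard_transforms = ["jitter", "scale"]  # stand-in for transform.standard_transforms

aliased = standard_transforms          # same list object, two names
copied = standard_transforms.copy()    # shallow copy: a new, independent list

copied.append("normalize")             # only the copy grows

print(standard_transforms)             # ['jitter', 'scale'] -- default untouched
print(aliased is standard_transforms)  # True
print(copied is standard_transforms)   # False
```

A shallow copy is enough here because the list is only ever appended to; the transform callables themselves are never mutated.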