xxxnell / how-do-vits-work

(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
https://arxiv.org/abs/2202.06709
Apache License 2.0

Plot for Relative log amplitudes of Fourier transformed feature maps #4

Closed xingchenzhao closed 2 years ago

xingchenzhao commented 2 years ago

Hi, thank you for the great paper. Could you please release the code, or give an implementation example, for plotting the "Relative log amplitudes of Fourier transformed feature maps"? Thanks!

xxxnell commented 2 years ago

Hi, thank you for your support!

The code for the Fourier analysis is messy at the moment, so I haven't released it yet. I will release it after re-implementing it.

The snippet below is pseudocode for the Fourier analysis. I hope this helps you.

import math

import matplotlib.pyplot as plt
import numpy as np
import torch

def fourier(x):  # 2D Fourier transform
    f = torch.fft.fft2(x)
    f = f.abs() + 1e-6  # Magnitude spectrum; the epsilon avoids log(0)
    f = f.log()
    return f

def shift(x):  # Shift the zero-frequency component to the center of the spectrum
    b, c, h, w = x.shape
    return torch.roll(x, shifts=(int(h / 2), int(w / 2)), dims=(2, 3))

fig, ax = plt.subplots(1, 1, figsize=(3.3, 4))
for latent in latents:  # `latents` is a list of hidden feature maps in latent spaces
    if len(latent.shape) == 3:  # For ViT: (batch, tokens, channels)
        b, n, c = latent.shape
        h, w = int(math.sqrt(n)), int(math.sqrt(n))  # Assume a square token grid
        latent = latent.permute(0, 2, 1).reshape(b, c, h, w)  # To (batch, channels, h, w)
    elif len(latent.shape) == 4:  # For CNN
        b, c, h, w = latent.shape
    else:
        raise ValueError("Unexpected latent shape: %s" % str(latent.shape))
    latent = fourier(latent)
    latent = shift(latent).mean(dim=(0, 1))  # Center the spectrum, then average over batch and channels
    latent = latent.diag()[int(h / 2):]  # Only use the half-diagonal components (low to high frequency)
    latent = latent - latent[0]  # Visualize 'relative' log amplitudes w.r.t. the lowest frequency

    # Plot Fourier transformed relative log amplitudes
    freq = np.linspace(0, 1, len(latent))
    ax.plot(freq, latent)
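
To finish the figure, a minimal follow-up sketch (the exact label wording may differ from the paper's figures):

ax.set_xlabel('Frequency')  # Normalized: 0.0 is the lowest frequency, 1.0 the highest
ax.set_ylabel('Relative log amplitude')
plt.show()
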
dinhanhx commented 2 years ago

@xxxnell when do you expect to release the code for all the Fourier analysis? (You don't need to give an exact date; something approximate like "next week" is fine.)

Also, how do I get the latents of a model?

xxxnell commented 2 years ago

Hi @dinhanhx

latents is a list of hidden states (latent feature maps). If you're using timm, you can get them with the snippet below:

import copy
import timm
import torch
import torch.nn as nn

# Divide the pretrained timm model into blocks.
name = 'vit_tiny_patch16_224'
model = timm.create_model(name, pretrained=True)

class PatchEmbed(nn.Module):  # Patch embedding + CLS token + positional embedding
    def __init__(self, model):
        super().__init__()
        self.model = copy.deepcopy(model)

    def forward(self, x, **kwargs):
        x = self.model.patch_embed(x)
        cls_token = self.model.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat((cls_token, x), dim=1)
        x = self.model.pos_drop(x + self.model.pos_embed)
        return x

class Residual(nn.Module):  # Residual connection: returns fn(x) + x
    def __init__(self, *fn):
        super().__init__()
        self.fn = nn.Sequential(*fn)

    def forward(self, x, **kwargs):
        return self.fn(x, **kwargs) + x

class Lambda(nn.Module):  # Wraps a plain function as an nn.Module
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x)

def flatten(xs_list):
    return [x for xs in xs_list for x in xs]

# `blocks` decomposes the model into a sequence of blocks:
# patch embedding, attention/MLP residual sub-blocks, and the classifier head
blocks = [
    PatchEmbed(model),
    *flatten([[Residual(b.norm1, b.attn), Residual(b.norm2, b.mlp)] 
              for b in model.blocks]),
    nn.Sequential(Lambda(lambda x: x[:, 0]), model.norm, model.head),
]
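
As an optional sanity check (a minimal sketch, not required for the analysis), running the blocks in sequence should reproduce the output of the unmodified model up to floating-point error:

# PatchEmbed holds a deepcopy of the model, so switch every block to eval mode
model.eval()
for block in blocks:
    block.eval()

dummy = torch.randn(1, 3, 224, 224)
out = dummy
with torch.no_grad():
    for block in blocks:
        out = block(out)
    assert torch.allclose(model(dummy), out, atol=1e-4)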

The snippet below builds on https://github.com/facebookresearch/mae to load and preprocess an example image:

import requests
import torch
import numpy as np

from PIL import Image
from einops import rearrange

imagenet_mean = np.array([0.485, 0.456, 0.406])
imagenet_std = np.array([0.229, 0.224, 0.225])

# Load an image
img_url = 'https://user-images.githubusercontent.com/11435359/147738734-196fd92f-9260-48d5-ba7e-bf103d29364d.jpg'
xs = Image.open(requests.get(img_url, stream=True).raw)
xs = xs.resize((224, 224))
xs = np.array(xs) / 255.

assert xs.shape == (224, 224, 3)

# Normalize by ImageNet mean and std
xs = xs - imagenet_mean
xs = xs / imagenet_std
xs = rearrange(torch.tensor(xs, dtype=torch.float32), 'h w c -> 1 c h w')

# Accumulate `latents` by collecting hidden states of a model
latents = []
with torch.no_grad():
    for block in blocks:
        xs = block(xs)
        latents.append(xs)

latents = latents[:-1]  # Drop the classifier output (logits)
latents = [latent[:, 1:] for latent in latents]  # Drop the CLS token from each feature map
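
Each remaining latent is a (batch, tokens, channels) tensor that the ViT branch of the plotting pseudocode above can consume directly; for vit_tiny_patch16_224 that is (1, 196, 192), i.e. a 14 x 14 grid of patch tokens with 192-dim embeddings. A quick shape check:

# Every latent should be (1, 196, 192) for vit_tiny_patch16_224:
# 196 = 14 x 14 patch tokens, 192 = embedding dimension
for latent in latents:
    assert latent.shape == (1, 196, 192)

These latents can then be fed straight into the plotting loop from my earlier comment.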

I can't give a definite timeline, but I'll try hard to release the full code for the Fourier analysis by next Friday!

dinhanhx commented 2 years ago

Alright, thanks for the work!

xxxnell commented 2 years ago

I have just released the code for the Fourier analysis! Please refer to the fourier_analysis.ipynb notebook. The code can also run on Colab (no GPU needed).

Please feel free to reopen this issue if the problem still exists.