yufengm / Adaptive

Pytorch Implementation of Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning

Encoder encodes the same image differently #8

Closed KeepingItClassy closed 6 years ago

KeepingItClassy commented 6 years ago

Hi,

I trained a fine-tuned model on my own custom dataset, and the results are generally quite accurate. However, with some images I've noticed that if I sample the same image multiple times I get different captions. The differences are usually small, but for my use case I need consistency. I tried to debug by printing out the encoder tensors and it looks like the model encodes the same image differently at different samplings. Is this expected behavior? Is there a way to "stabilize" the encoder so it encodes the same image the same way each time?

Thanks!

yufengm commented 6 years ago

There is randomness in the transform pipeline during training. You can fix this by removing transforms.RandomCrop(args.crop_size) and transforms.RandomHorizontalFlip().

But it would be good to keep them, since such data augmentation can improve generalizability. During testing, you can use CenterCrop instead to ensure the same result for the same image.
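For illustration, here is a minimal stdlib-only sketch of why a center crop is repeatable while a random crop is not (the function names `random_crop`/`center_crop` are mine, not from this repo; a 2D list stands in for an image):

```python
import random

def random_crop(rows, size):
    """Take a size x size window at a random offset (stochastic)."""
    top = random.randint(0, len(rows) - size)
    left = random.randint(0, len(rows[0]) - size)
    return [row[left:left + size] for row in rows[top:top + size]]

def center_crop(rows, size):
    """Take a size x size window at the center (deterministic)."""
    top = (len(rows) - size) // 2
    left = (len(rows[0]) - size) // 2
    return [row[left:left + size] for row in rows[top:top + size]]

grid = [[r * 10 + c for c in range(6)] for r in range(6)]
# center_crop always returns the same window for the same input,
# while random_crop can return a different window on every call
assert center_crop(grid, 2) == center_crop(grid, 2)
```

torchvision's RandomCrop/RandomHorizontalFlip and CenterCrop behave the same way at this level: the former draw fresh randomness per call, the latter is a pure function of the input.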

KeepingItClassy commented 6 years ago

Thank you for the quick reply! However, this is happening when I submit new images for captioning with a saved, trained model. I wasn't using random crop during training, but I did use random horizontal flip. My caption generation script doesn't apply any random transforms, though - just resizing and normalization of the input image. Is the trained model doing the random flipping before it encodes the image?

Thanks again!

yufengm commented 6 years ago

Did you use my following function? https://github.com/yufengm/Adaptive/blob/4c0555af546cdbd49e99ff1bd6e91d1654ae0cd2/utils.py#L98 Or did you write your own code for caption generation?

yufengm commented 6 years ago

The only randomness I can think of is the dropout/batchnorm that might be incorporated in the model. During evaluation, you'll need to switch the model to 'eval' mode!
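A toy sketch of that train/eval distinction, mimicking PyTorch's Dropout semantics in plain Python (`ToyDropout` is a hypothetical name for illustration, not part of this repo):

```python
import random

class ToyDropout:
    """Mimics nn.Dropout: random masking in train mode, identity in eval."""
    def __init__(self, p=0.5):
        self.p = p
        self.training = True  # like nn.Module, defaults to train mode

    def eval(self):
        self.training = False
        return self

    def __call__(self, xs):
        if self.training:
            # stochastic: a different mask (and output) on every call
            return [0.0 if random.random() < self.p else x / (1 - self.p)
                    for x in xs]
        return list(xs)  # deterministic pass-through

layer = ToyDropout().eval()
# in eval mode, the same input always produces the same output
assert layer([1.0, 2.0, 3.0]) == layer([1.0, 2.0, 3.0])
```

Calling model.eval() in PyTorch flips exactly this kind of flag on every Dropout and BatchNorm submodule, which is why forgetting it makes inference nondeterministic.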

KeepingItClassy commented 6 years ago

I wrote my own caption generation script that captions one image at a time. I need the resized images to keep their aspect ratio, so I wrote a custom resize_pad function using Pillow. I already had model.eval() - without it the captions were completely wrong. Here's my caption function code.

from torch.autograd import Variable
from torchvision import transforms

def caption(image, model, vocab):

    # standard ImageNet normalization
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406),
                             (0.229, 0.224, 0.225))])

    image = resize_pad(image)              # custom aspect-ratio-preserving resize
    image = transform(image).unsqueeze(0)  # add batch dimension
    image_tensor = Variable(image, volatile=True)

    generated_captions, _, _ = model.sampler(image_tensor)
    captions = generated_captions.data.numpy()

    sampled_ids = captions[0]
    sampled_caption = []

    # map word ids back to words, stopping at the end token
    for word_id in sampled_ids:
        word = vocab.idx2word[word_id]
        if word == '<end>':
            break
        sampled_caption.append(word)

    sentence = ' '.join(sampled_caption)
    return sentence

sleighsoft commented 6 years ago

@KeepingItClassy How does the resize_pad function look? Do you mind sharing your code?

KeepingItClassy commented 6 years ago

@sleighsoft, I'm using PIL:

from __future__ import division
from PIL import Image

def resize_pad(image):
    """Resize to fit a 224x224 square, preserving aspect ratio, then pad."""
    IMAGE_SIZE = 224
    # background canvas filled with the color of the top-left pixel
    bg_color = image.getpixel((0, 0))
    new_image = Image.new('RGB', (IMAGE_SIZE, IMAGE_SIZE), bg_color)
    old_width, old_height = image.size

    if old_width > old_height:
        # landscape: width becomes 224, height scales down proportionally
        ratio = old_width / old_height
        new_height = int(round(IMAGE_SIZE / ratio))
        position = int(round((IMAGE_SIZE - new_height) / 2))
        resized_image = image.resize((IMAGE_SIZE, new_height), Image.HAMMING)
        new_image.paste(resized_image, (0, position))
    else:
        # portrait (or square): height becomes 224, width scales down
        ratio = old_height / old_width
        new_width = int(round(IMAGE_SIZE / ratio))
        position = int(round((IMAGE_SIZE - new_width) / 2))
        resized_image = image.resize((new_width, IMAGE_SIZE), Image.HAMMING)
        new_image.paste(resized_image, (position, 0))

    return new_image

I'm creating a background image with the target size, filled with the color of the upper left-hand corner pixel of the original image, but you can use any RGB tuple. Since the goal is to preserve the aspect ratio of the original, I check whether the width is greater than the height or vice versa, then resize and position the image so it's centered accordingly. Image.HAMMING is an optional resampling filter that makes the resized images look smoother; you can find other filter options in the Pillow docs. Hope this helps!
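For reference, the resizing math in that function boils down to scaling the longer side to 224 and the shorter side proportionally. A stdlib-only restatement (`fitted_size` is my name, not part of the script above):

```python
IMAGE_SIZE = 224

def fitted_size(old_w, old_h):
    """Dimensions of the resized image before padding, as in resize_pad:
    the longer side becomes IMAGE_SIZE, the shorter keeps the ratio."""
    if old_w > old_h:
        return IMAGE_SIZE, int(round(IMAGE_SIZE * old_h / old_w))
    return int(round(IMAGE_SIZE * old_w / old_h)), IMAGE_SIZE

# a 400x200 landscape fits to 224x112, then gets padded up to 224x224
assert fitted_size(400, 200) == (224, 112)
```

Because this mapping is a pure function of the input dimensions, resize_pad itself introduces no randomness between runs.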

KeepingItClassy commented 6 years ago

As for the original issue, I solved it by getting rid of volatile=True. Closing.
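For context: Variable(..., volatile=True) was the pre-0.4 PyTorch way to disable autograd during inference; both volatile and Variable itself were deprecated in PyTorch 0.4. The modern equivalent is roughly:

```python
import torch

x = torch.ones(3, requires_grad=True)

# old style (pre-0.4): image_tensor = Variable(image, volatile=True)
# modern style: wrap inference in a no_grad() context instead
with torch.no_grad():
    y = x * 2  # no autograd graph is built; y does not require grad

assert not y.requires_grad
```

In newer PyTorch, plain tensors are passed to the model directly inside this context; no Variable wrapper is needed.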

CherishineNi commented 5 years ago

@KeepingItClassy Could you share your test code, i.e. code that displays the images and their captions on screen? Thanks very much.

CherishineNi commented 5 years ago

@sleighsoft Could you share your test code, i.e. code that displays the images and their captions on screen? Thanks very much.