rsomani95 / shot-type-classifier

Detecting cinema shot types using a ResNet-50

How to adapt the method for a stream? #1

Closed thoppe closed 5 years ago

thoppe commented 5 years ago

The examples work well for a fixed set of images, but I'm having trouble trying to adjust them for a stream. I've asked the general question on SO, but I was wondering if there was a cleaner way to do this with the model you've got.

I'm not fully understanding why we have to create a data object in the first place -- especially with files in both "train" and "valid"

    data = ImageDataBunch.from_folder(
        path,
        "train",
        "valid",
        size=(375, 666),
        ds_tfms=get_tfms(),
        bs=1,
        resize_method=ResizeMethod.SQUISH,
        num_workers=0,
    ).normalize(imagenet_stats)

Couldn't we just load the model, apply the preprocessing, and then output the result?

Thanks for the help, the model looks to be amazing!

rsomani95 commented 5 years ago

The data object needs to be created only if you want to generate heatmaps. It's a hacky way of doing it, but it's the only way I could get it to work, for now.

What's your objective? Do you want to

  1. Get the model's predictions?
  2. Generate heatmaps?

If it's just getting predictions, then you should be able to do that from a video stream without creating an ImageDataBunch. Take a look at the save_preds function in the get-preds.py file here.
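For a stream, the per-frame loop could look roughly like the sketch below. This is my illustration, not code from the repo: it assumes OpenCV for capture and fastai v1 for inference, and `prepare_frame` / `classify_stream` are names I'm making up here.

```python
import numpy as np

def prepare_frame(frame_bgr):
    """Convert an OpenCV BGR uint8 frame to a float32 RGB array in [0, 1]."""
    rgb = frame_bgr[..., ::-1]             # OpenCV yields BGR; flip to RGB
    return rgb.astype(np.float32) / 255.0  # astype also makes a contiguous copy

def classify_stream(learn, cap):
    """Sketch: predict the shot type of each frame from a cv2.VideoCapture.

    Assumes fastai v1 (Image wraps a CHW float tensor) and an
    already-loaded learner; neither is imported until this is called.
    """
    import torch
    from fastai.vision import Image
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        arr = prepare_frame(frame)
        img = Image(torch.from_numpy(arr).permute(2, 0, 1))  # HWC -> CHW
        yield learn.predict(img)[0]
```

Only `prepare_frame` is plain numpy; the rest depends on fastai v1 and an open capture, so treat it as a sketch rather than tested code.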

Does that help?

This is a useful feature to add to the repo. I'll work on the code and push it soon.

> Thanks for the help, the model looks to be amazing!

Thanks! Happy to help :)

thoppe commented 5 years ago

Thanks for the response. I only need 1), the model's predictions. I used get-preds.py as a template for loading my own images. It seems to call initialise.py, which in turn creates a data = ImageDataBunch.from_folder, which prompted my question. What I need is a way to load the "learner"

learn = cnn_learner(data, models.resnet50, metrics = [accuracy], pretrained=True)
learn = learn.to_fp16()
learn.load(path/'models'/'shot-type-classifier');

without having to create either a data element, or an empty one that only has the transforms. Right now, I'm dumping each image to a temporary file and then reading it back in with open_image! This applies the transforms, but it's a huge waste of IO as I've already got the image loaded as a numpy array.

While you're here, it might be nice to note in the documentation what image format you're using:

1) width by height, or height by width?
2) RGB or BGR?

rsomani95 commented 5 years ago

So it turns out the method I've put in place for predictions is far from optimal. Reorganising the directory will take some time, but in the meanwhile I've written some code that should help you.

First, download the .pkl model. I've included the link in the get_data_model.sh script. As mentioned here, this is the correct way to use a model for inference.

In my testing, this worked with arrays shaped (height, width, channels); the size of the array doesn't matter (images don't need to be (375, 666)).

from fastai.vision import *

learn = load_learner('~/shot-type-classifier/models', file='shot-type-classifier.pkl')

## Predict from an image on disk
img = '~/test.jpg'
learn.predict(open_image(img))

## Predict from a numpy array
# arr.shape --> (height, width, 3)
img = PIL.Image.fromarray(arr).convert('RGB')
img = pil2tensor(arr, np.float32).div_(255) # Convert to torch.Tensor
img = Image(img) # Convert to fastai.vision.image.Image

learn.predict(img)[0] # --> Shot Type
learn.predict(img)[2] # --> Probabilities

A further optimisation would be to convert directly from a numpy.ndarray to a torch.Tensor, but in my brief testing that gave some strange errors that I haven't gotten around to fixing yet.
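One possible cause of those errors (my guess, not confirmed from the repo): a channel-reversing view like arr[..., ::-1] has negative strides, which torch.from_numpy rejects; copying the view into contiguous memory first avoids it. A minimal numpy-only illustration:

```python
import numpy as np

arr = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
flipped = arr[..., ::-1]  # BGR -> RGB as a strided view, not a copy
print(flipped.flags['C_CONTIGUOUS'])   # False: torch.from_numpy would reject this

fixed = np.ascontiguousarray(flipped)  # explicit contiguous copy
print(fixed.flags['C_CONTIGUOUS'])     # True: safe to hand to torch.from_numpy
```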

thoppe commented 5 years ago

Thanks, your example helped a lot! I found you didn't need the line img = PIL.Image.fromarray(arr).convert('RGB'), since pil2tensor is applied to arr directly.