pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

Inversion of prototype transforms #6062

Open pmeier opened 2 years ago

pmeier commented 2 years ago

When debugging vision models, it is often useful to be able to map predicted bounding boxes, segmentation masks, or keypoints back onto the original image. To do this conveniently, each transformation should know how to invert itself. A discussion about this can be found in this thread. While useful, it was deemed lower priority than adding general support for non-image input types in the prototype transforms. However, from the preliminary discussions, inverting transformations does not seem to conflict with that proposal and can thus be added later.
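
As a rough illustration of the idea, here is a minimal sketch of a transform that records the parameters it used so a predicted box can be mapped back onto the original image. The class and method names (`InvertibleResize`, `invert_boxes`) are hypothetical and not part of the prototype API:

```python
import torch

class InvertibleResize:
    """Hypothetical sketch: a resize that remembers enough state to undo itself."""

    def __init__(self, new_h, new_w):
        self.new_h, self.new_w = new_h, new_w

    def __call__(self, image):
        # Remember the original spatial size so predictions can be mapped back later.
        self.orig_h, self.orig_w = image.shape[-2:]
        return torch.nn.functional.interpolate(
            image.unsqueeze(0), size=(self.new_h, self.new_w), mode="bilinear"
        ).squeeze(0)

    def invert_boxes(self, boxes):
        # Boxes are (x1, y1, x2, y2) in resized coordinates; scale them back.
        sx = self.orig_w / self.new_w
        sy = self.orig_h / self.new_h
        return boxes * torch.tensor([sx, sy, sx, sy])

t = InvertibleResize(256, 256)
resized = t(torch.rand(3, 512, 384))
boxes_on_original = t.invert_boxes(torch.tensor([[10.0, 20.0, 100.0, 200.0]]))
```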

Apart from the thread linked above, there were also some discussions without written notes. They are listed here so they don't get lost:

cc @vfdev-5 @datumbox @bjuncek @pmeier

vfdev-5 commented 2 years ago

Following the linked thread and Yuxin's comment, transform inversion makes a lot of sense for test-time augmentation (TTA), where we want to reduce prediction variance by combining multiple predictions produced by a single model on transformed versions of the input data:

```python
output0 = model(input)
output1 = transform1.invert(model(transform1(input, params1)), params1)
output2 = transform2.invert(model(transform2(input, params2)), params2)
...
final_output = aggregate(output0, output1, output2, ...)
```

If we want TTA to be more effective, we may not want to use non-invertible transforms (like crop), as we won't be able to restore predictions in the original space. IMO, we can at first provide the inversion feature for invertible ops only.
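
To make the pattern concrete, here is a small self-contained example using a horizontal flip, which is its own inverse; the dummy identity model only stands in for a real dense-prediction network and nothing here reflects the actual API:

```python
import torch

def hflip(x):
    # Horizontal flip is its own inverse, which makes it convenient for TTA.
    return torch.flip(x, dims=[-1])

model = lambda x: x  # stand-in for a dense-prediction model, e.g. segmentation

image = torch.rand(1, 3, 8, 8)

pred0 = model(image)
pred1 = hflip(model(hflip(image)))  # predict on the flipped input, then map back

final_output = (pred0 + pred1) / 2  # aggregate by averaging
```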

JLrumberger commented 2 years ago

I have invertible transformations up and running for mirror, translate, zoom, scale, rotate, shear, and elastic transforms, and I'd be happy to contribute if you want :)
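
For the affine ones among those (translate, zoom/scale, rotate, shear), inversion essentially amounts to inverting the homogeneous matrix that was applied to the coordinates; the snippet below is only a minimal sketch of that idea, not the actual implementation:

```python
import math
import torch

theta = math.radians(30.0)
# Forward 3x3 homogeneous matrix: rotation by 30 degrees plus a translation.
forward = torch.tensor([
    [math.cos(theta), -math.sin(theta),  5.0],
    [math.sin(theta),  math.cos(theta), -3.0],
    [0.0,              0.0,              1.0],
])
inverse = torch.linalg.inv(forward)

# A keypoint mapped through the transform and back lands where it started.
point = torch.tensor([10.0, 20.0, 1.0])
restored = inverse @ (forward @ point)
assert torch.allclose(restored, point, atol=1e-5)
```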

datumbox commented 2 years ago

@JLrumberger thanks, this is very interesting! We definitely want to consider this after finalizing the main API of the transforms. We want to avoid making it more complex right now, but if you are happy to wait, we can kick this off once the prototype is complete. What do you think? In the meantime, other contributions from you are very welcome!