alok-ai-lab / pyDeepInsight

A python implementation of the DeepInsight methodology.
GNU General Public License v3.0
158 stars 47 forks

inverse_transform? #26

Closed Backflipz closed 2 years ago

Backflipz commented 2 years ago

Apologies if this is a dumb question or not fully thought out, but I was wondering if it would be feasible to provide an inverse_transform function? I ask because the idea is to use current generative architectures to produce new images and then translate them back to the original feature space to observe and contextualize what transformations took place. Is this possible, or would it not make sense due to the inherent data loss from integer scaling?

kaboroevich commented 2 years ago

Implementing an inverse_transform method shouldn't be an issue. We think it's a good idea and will move forward with adding it.

Whether the loss you observe, from integer scaling or from overlap of features in the pixel space, is significant will depend on the specific application. It's something to keep in mind, but I don't think it invalidates the method's general usefulness.
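As a toy sketch of the overlap issue (this is not the pyDeepInsight API; the feature-to-pixel mapping and the averaging rule are illustrative assumptions), consider three features where two collide on the same pixel. On the inverse trip, colliding features can only recover the shared pixel value:

```python
import numpy as np

# Illustrative mapping: features 0 and 1 collide on pixel 0; feature 2
# has pixel 1 to itself.
feature_to_pixel = np.array([0, 0, 1])
x = np.array([0.2, 0.8, 0.5])  # original feature vector

# Forward (assumed rule): each pixel takes the mean of its mapped features.
n_pixels = feature_to_pixel.max() + 1
image = np.array([x[feature_to_pixel == p].mean() for p in range(n_pixels)])

# Inverse: each feature reads back the value of its pixel.
x_rec = image[feature_to_pixel]

print(image)  # [0.5 0.5]
print(x_rec)  # [0.5 0.5 0.5] -- the colliding features recover identical values
```

Feature 2 round-trips exactly, while features 0 and 1 both come back as the pixel mean, which is the loss described above.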

kaboroevich commented 2 years ago

I added the requested method. It takes an image array or a batch of image arrays and returns the original feature space. Features that were mapped to the same pixel will receive the same value, so it's preferable to have a single feature per pixel. One approach is to use UMAP as the feature extractor and set discretization='assignment' to reduce the amount of overlap.

Do note that, depending on the output of the feature extractor, the scipy.optimize.linear_sum_assignment call used by discretization='assignment' can take a very long time.
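To illustrate the objective behind discretization='assignment' (a sketch, not pyDeepInsight internals): it solves a linear sum assignment problem, giving each feature its own pixel while minimizing the total distance between embedded feature positions and pixel centers. The brute-force search below over a hypothetical 3x3 cost matrix shows the same objective that scipy.optimize.linear_sum_assignment optimizes efficiently:

```python
from itertools import permutations

# cost[i][j]: assumed distance from feature i's embedded position to pixel j.
cost = [
    [0.1, 0.9, 0.8],
    [0.2, 0.1, 0.7],
    [0.9, 0.2, 0.1],
]

# Try every one-to-one feature-to-pixel mapping and keep the cheapest.
best = min(
    permutations(range(3)),
    key=lambda p: sum(cost[i][p[i]] for i in range(3)),
)
print(best)  # (0, 1, 2): each feature keeps its nearest pixel here
```

Brute force is O(n!) and only viable for tiny examples; scipy's solver runs in polynomial time, though with many features and pixels it can still be slow, which is the caveat noted above.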