alok-ai-lab / pyDeepInsight

A Python implementation of the DeepInsight methodology.
GNU General Public License v3.0

Extracting which pixels in the image correspond to which features #2

Closed Abdulk084 closed 3 years ago

Abdulk084 commented 3 years ago

We have data where each row represents a sample and each column represents a feature. In my case, I have the data in a NumPy array x with the following shape:

(550, 39429)

Now when I apply it (the ImageTransformer instance) to this data as follows, the image data is formed using t-SNE:

mat_train = it.transform(X)

mat_train is a list of 550 images. Each is 120x120.

Is there any function you built that can help me relate each of the pixel values generated in mat_train to the original features in x? Maybe it can show me which pixel corresponds to which feature (set) in the original data?

kaboroevich commented 3 years ago

The coordinates for the features are stored in the internal attribute self._coords. If you would like to get a pandas DataFrame from this, you could use the following code:

import pandas as pd
from pyDeepInsight import ImageTransformer
it = ImageTransformer(...)
it.fit_transform(...)
# one (y, x) pixel row per original feature
coords = pd.DataFrame(it._coords.T.copy(), columns=['y', 'x'])

Note that pixel location (0,0) is the upper-leftmost pixel of the image.

I will add a self.coords() method to extract a similar numpy array soon.
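
As a side note, here is a minimal sketch of how that DataFrame could be inverted to answer the original question, grouping the original feature (column) indices by the pixel they land on. It assumes the it and coords variables from the snippet above:

from collections import defaultdict

# Map each pixel (y, x) to the list of original feature indices placed there.
pixel_to_features = defaultdict(list)
for feature_idx, (y, x) in enumerate(zip(coords['y'], coords['x'])):
    pixel_to_features[(y, x)].append(feature_idx)

# Example lookup: all features that share the pixel assigned to feature 0.
print(pixel_to_features[(coords.loc[0, 'y'], coords.loc[0, 'x'])])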

kaboroevich commented 3 years ago

Added self.coords() method in commit https://github.com/alok-ai-lab/DeepInsight/commit/94165d36548b687d143731c8120e5ec9a6ae34b5
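
With a version that includes that commit, a minimal usage sketch might look like the following (the exact return shape is an assumption here and worth checking against your installed version):

coords_arr = it.coords()   # numpy array of per-feature pixel coordinates
print(coords_arr.shape)    # assumed (n_features, 2), i.e. (39429, 2) for the data above
print(coords_arr[0])       # pixel location assigned to feature 0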

Abdulk084 commented 3 years ago

So I used your suggestion and got the following pandas DataFrame with coords.head():

    y   x
0   22  56
1   42  43
2   35  106
3   40  96
4   54  93

This means that feature 0 is mapped to (56, 22) and feature 1 is mapped to (43, 42), and so on?

kaboroevich commented 3 years ago

Yes, that is correct.
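
To make that interpretation concrete, here is a small sketch using the coords DataFrame built above. Note that with 39,429 features projected onto a 120x120 grid, some features necessarily share a pixel, which matches the "feature (set)" wording in the original question:

import numpy as np

# Pixel assigned to feature 0: y = 22, x = 56 in the output above.
y0, x0 = coords.loc[0, 'y'], coords.loc[0, 'x']

# All original feature indices mapped to that same pixel.
shared = np.where((coords['y'] == y0) & (coords['x'] == x0))[0]
print(shared)   # includes feature 0, plus any other features sharing pixel (y=22, x=56)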