The coordinates for the features are stored in the internal attribute self._coords. If you would like to get a pandas DataFrame from this, you could use the following code:
import pandas as pd

it = ImageTransformer(...)
it.fit_transform(...)
coords = pd.DataFrame(it._coords.T.copy(), columns=['y', 'x'])  # one row per feature: its pixel (y, x)
Note that pixel location (0,0) is the upper-leftmost pixel of the image.
I will add a self.coords() method to extract a similar numpy array soon.
Added a self.coords() method in commit https://github.com/alok-ai-lab/DeepInsight/commit/94165d36548b687d143731c8120e5ec9a6ae34b5
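A minimal sketch of how the new method could be used (assumption: coords() returns one (y, x) row per feature, like it._coords.T above; check the commit for the exact return shape):

# Assumes coords() returns an array with one (y, x) row per feature.
feature_coords = pd.DataFrame(it.coords(), columns=['y', 'x'])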
So I used your suggestion and got the following pandas DataFrame:
coords.head()

    y    x
0  22   56
1  42   43
2  35  106
3  40   96
4  54   93
This means that feature 0 is mapped to pixel (x=56, y=22), feature 1 is mapped to (x=43, y=42), and so on?
Yes, that is correct.
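You can also confirm an individual mapping directly from the coords DataFrame built above; for example (row i of coords corresponds to feature/column i of the input data):

y, x = coords.loc[0, 'y'], coords.loc[0, 'x']
print(x, y)  # 56 22, i.e. feature 0 -> pixel (x=56, y=22)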
My data is arranged so that each row represents a sample and each column represents a feature. In my case, I have the data in a numpy array X with shape (550, 39429). Now when I apply the fitted transformer to it as follows, image data is formed using t-SNE:
mat_train = it.transform(X)
mat_train is a list of 550 images, each 120x120. Is there any function you built which can help me relate each of the pixel values generated in mat_train to the original features in X? Maybe it can show me which pixel corresponds to which feature (or set of features) in the original data?
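One way to do this, sketched from the coords DataFrame built earlier in this thread (this is not a built-in DeepInsight function, and note that several features may land on the same pixel):

from collections import defaultdict

# Invert the feature -> pixel mapping: for each pixel, collect the indices of
# the original columns of X that the transformer placed there.
pixel_to_features = defaultdict(list)
for feat_idx, (y, x) in enumerate(zip(coords['y'], coords['x'])):
    pixel_to_features[(int(y), int(x))].append(feat_idx)

# Example: which original features of X contribute to pixel (y=22, x=56)?
print(pixel_to_features[(22, 56)])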