fabriziojpiva opened this issue 3 years ago
The network has two outputs, which are `feat` and `out`; note that `feat` and `out` have the same shape. The process is as follows:

1. Get pseudo labels by using `argmax` on `out`.
2. For each class, select the corresponding `feat` at pixel level by the pseudo labels, and then perform `F.adaptive_avg_pool2d` on the selected `feat` to get the image-level features of each class.
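A minimal PyTorch sketch of how I read those two steps; the function name, the assumed tensor shapes, and the renormalization by the mask fraction are my own additions, not necessarily what the authors did:

```python
import torch
import torch.nn.functional as F

def image_level_class_features(feat, out, num_classes):
    # feat: [B, D, H, W] features, out: [B, num_classes, H, W] logits (assumed shapes)
    pseudo = out.argmax(dim=1)                          # [B, H, W] pseudo labels
    per_class = []
    for c in range(num_classes):
        mask = (pseudo == c).unsqueeze(1).float()       # [B, 1, H, W]
        selected = feat * mask                          # keep only pixels predicted as class c
        pooled = F.adaptive_avg_pool2d(selected, 1)     # [B, D, 1, 1] global average
        # renormalize by the fraction of selected pixels so the zeroed-out
        # background does not dilute the class mean (my assumption)
        frac = F.adaptive_avg_pool2d(mask, 1).clamp(min=1e-6)
        per_class.append((pooled / frac).flatten(1))    # [B, D] image-level feature for class c
    return per_class                                    # list of num_classes tensors
```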
Why is it needed to perform adaptive average pooling? To my understanding, if I were to plot features I would do the following:
1. Perform `argmax` on `out`. The resulting tensor `out_argmax` has a shape of `[batch_size, h, w]`, which I flatten into a one-dimensional vector called `class_ids` of size `[N]`, where `N = batch_size*h*w`.
2. Reshape `feat` to match the vector of `class_ids`: from a feature tensor of shape `[batch_size, depth, h, w]` to a new shape `[N, depth]`. Let's call the resulting reshaped tensor `feats_r`.
3. Collect `class_ids` from 1) and `feats_r` from 2) into a pandas dataframe. All the class ids and reshaped features are accumulated into a pandas dataframe `df` with `depth + 1` columns, where the first `depth` columns are for the features and the last one is for the class ids.
4. Run UMAP on `df`, and plot the resulting embeddings using the class ids for the corresponding color of each point.
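Putting those four steps together, this is roughly what I mean (just a sketch; shapes are the ones assumed above, and UMAP comes from the umap-learn package with default parameters):

```python
import pandas as pd
import torch
import umap  # umap-learn

def embed_features(feat, out):
    # feat: [batch_size, depth, h, w], out: [batch_size, num_classes, h, w]
    class_ids = out.argmax(dim=1).flatten().cpu().numpy()          # [N], N = batch_size*h*w
    # move depth last before reshaping so each row stays one pixel's feature vector
    feats_r = feat.permute(0, 2, 3, 1).reshape(-1, feat.size(1)).cpu().numpy()  # [N, depth]
    df = pd.DataFrame(feats_r)
    df["class_id"] = class_ids                                     # depth + 1 columns
    embedding = umap.UMAP(n_components=2).fit_transform(df.drop(columns=["class_id"]).values)
    return embedding, class_ids                                    # color the scatter plot by class_ids
```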
I just tried this approach, storing all these vectors in a dataframe and then reducing it to 2D representations using UMAP, but I obtained very dense clusters compared to the figures in the manuscript, where the point clouds look more sparse. Could you please provide more information about these feature representations?
Would be glad to hear from you. Thanks!
no reply, right?
Hello, thanks for such a good contribution to the field; it is really groundbreaking work.
I was trying to reproduce the plot of the features that you have in Figure 5 of the main manuscript using UMAP. How did you determine which features belong to those specific classes (building, traffic sign, pole, and vegetation)? We can determine from the output which class each pixel belongs to, but how did you do it in the feature space? By resizing the logits back to the feature space shape, then applying argmax to determine the correspondence?
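For concreteness, a minimal sketch of what I mean by that last option (just my guess at the procedure, not necessarily what you did):

```python
import torch.nn.functional as F

def pixel_to_feature_correspondence(out, feat):
    # out: [B, num_classes, H_out, W_out] logits, feat: [B, D, H_f, W_f] features (assumed shapes)
    # resize the logits to the spatial size of the feature map, then take the argmax
    out_resized = F.interpolate(out, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    class_map = out_resized.argmax(dim=1)   # [B, H_f, W_f]: class id assigned to each feature location
    return class_map
```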