Hi @rjanvier, thanks for reaching out :blush:
The way I see it, there is no need for an aggregation scheme. The trick would simply be to compute the same handcrafted features we already do, except that we would use different subsets of the provided neighbors.
Assuming the neighbors are passed in order of increasing distance, we can easily compute the features for a series of increasing scales. These could be defined either as K-NN scales (e.g. $K \in \{10, 25, 50, \dots\}$) or as radius scales (e.g. $R \in \{0.05, 0.1, 0.2, \dots\}$).
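To illustrate, here is a minimal numpy sketch of the idea, not pgeof's actual API: `eigen_features` is a hypothetical stand-in for the existing single-scale feature computation, and `multiscale_features` just slices the distance-sorted neighbor array once per K scale.

```python
import numpy as np

def eigen_features(xyz, neighbors):
    # Hypothetical stand-in for the existing single-scale features:
    # linearity, planarity, scattering from the neighborhood covariance.
    nbr = xyz[neighbors]                                 # (N, k, 3)
    centered = nbr - nbr.mean(axis=1, keepdims=True)
    cov = np.einsum('nki,nkj->nij', centered, centered) / nbr.shape[1]
    l3, l2, l1 = np.linalg.eigvalsh(cov).T               # eigenvalues, l1 >= l2 >= l3
    eps = 1e-12
    return np.stack([(l1 - l2) / (l1 + eps),             # linearity
                     (l2 - l3) / (l1 + eps),             # planarity
                     l3 / (l1 + eps)], axis=1)           # scattering

def multiscale_features(xyz, neighbors, k_scales=(10, 25, 50)):
    # Neighbors sorted by increasing distance: each K scale is a prefix slice.
    return [eigen_features(xyz, neighbors[:, :k]) for k in k_scales]
```

Radius scales would work the same way: since the distances are sorted, a per-point `np.searchsorted` on the distance array gives the cutoff index for each $R$.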
Do you see what I mean? Were you thinking of anything different?
Hi Damien, thank you for your quick answer.
Yes, I think I follow you, and I already had this "trick" in mind, since pgeof's input format easily allows this behavior.
My question is more about the output format and downstream usage of the multi-scale features. If I follow your last post, we would output one feature map for each K / R selected by the user, with no aggregation. Combining the scales, or any putative aggregation, would be left to the responsibility of the caller of pgeof, am I right?
Absolutely! I think it is up to the user to decide what to do with the resulting features. This should not require too many changes to the code base: we would only need to take a list of K / R values as input.
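For instance, caller-side usage could look like the sketch below (again purely illustrative, reusing the hypothetical `multiscale_features` from the earlier snippet): the per-scale feature maps are stacked, and the caller picks their own combination.

```python
import numpy as np
from scipy.spatial import cKDTree

xyz = np.random.rand(1000, 3).astype(np.float32)
# cKDTree.query returns neighbors sorted by increasing distance
_, neighbors = cKDTree(xyz).query(xyz, k=50)

feats = np.stack(multiscale_features(xyz, neighbors))  # (n_scales, N, n_features)

# Aggregation is the caller's choice, e.g.:
concatenated = feats.transpose(1, 0, 2).reshape(len(xyz), -1)  # keep all scales
pooled = feats.max(axis=0)                                     # or pool across scales
```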
Thanks a lot for looking into this !
Hi Damien, I have a little window to try to implement the multi-scale feature computation we talked about (see #4). I would like to know how you envision it: would you want to rely on an aggregation scheme (I don't think a mean makes sense for all features), or output the whole multi-scale feature map? Thanks in advance, Romain