adswa opened 5 years ago
yes, for any (L1, L2)
For a linear classifier, the coefficient is just a decision-boundary coefficient for a classification where L1 is assigned the "-1" label and L2 the "+1" label. So a positive value says that to go from L1 to L2 the feature value would increase, and a negative value says that to go from L1 to L2 it would decrease. Swapping the direction, a negative value says that to go from L2 to L1 the feature value would increase ;) So for (FFA, PPA), negative values would indeed correspond to higher values for FFA than for PPA.
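A minimal numpy sketch of this sign convention (the weight, bias, and feature values here are made up for illustration, not taken from the actual model):

```python
import numpy as np

# Hypothetical 1-feature linear decision function f(x) = w*x + b,
# with the sorted label convention ('FFA' -> -1, 'PPA' -> +1).
w, b = -2.0, 0.0            # a negative coefficient
f = lambda x: w * x + b

x_high, x_low = 3.0, -3.0   # high vs. low feature (activation) value

# With w < 0, a HIGH feature value yields f(x) < 0, i.e. the "-1" class (FFA),
# while a LOW feature value yields f(x) > 0, i.e. the "+1" class (PPA).
print(f(x_high) < 0)  # True -> high activation is classified as FFA
print(f(x_low) > 0)   # True -> low activation is classified as PPA
```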
Just following up on a short clarification/conversation on this PR for future reference, should I forget that I already thought about it: ulabels will generally be sorted, because during training ulabels are assigned as

`self.ulabels = ulabels = targets_sa.unique`

where the `unique` method uses `np.unique` internally, which sorts what it returns. So ulabels are sorted in lexicographic order. I therefore believe that in the FFA-PPA case, negative weights suggest greater FFA activation. I'll assert this assumption in the code, and then we can be sure in which ROI's favor to interpret estimate signs.
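A quick sketch of the sorting behavior being relied on (the label array is illustrative; only `np.unique` is from the code under discussion):

```python
import numpy as np

# Unsorted target labels, as they might appear in the training data.
labels = np.array(['PPA', 'FFA', 'PPA', 'FFA'])

# np.unique returns the unique values in sorted (here: lexicographic) order,
# so 'FFA' comes first and gets the "-1" label, 'PPA' second with "+1".
ulabels = np.unique(labels)
print(ulabels)  # ['FFA' 'PPA']

# The assertion proposed above: ulabels must actually be sorted.
assert list(ulabels) == sorted(ulabels)
```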