The proposed change is to the function called FerNNClassifier::getFeatures().
Originally, the function was:
for (int t=0;t<nstructs;t++){
    leaf=0;
    for (int f=0; f<structSize; f++){
        leaf = (leaf << 1) + features[scale_idx][t*nstructs+f](image);
    }
    fern[t]=leaf;
}
With nstructs = 10 and structSize = 13.
The issue comes from the index used to look up the features when leaf is updated: the two size constants are swapped, so t is multiplied by nstructs (10) instead of structSize (13). As a result, not all features from 0 to 129 are used; the highest index ever read is t*nstructs+f = 9*10 + 12 = 102. Each leaf still gets its full 13 bits (130 bits across the 10 ferns) because some features are read several times. So the algorithm relies on only 103 distinct features instead of the intended 130, which decreases its performance.
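For clarity, this is what the loop looks like with the corrected stride; the only change is structSize in place of nstructs inside the index:

for (int t=0;t<nstructs;t++){
    leaf=0;
    for (int f=0; f<structSize; f++){
        leaf = (leaf << 1) + features[scale_idx][t*structSize+f](image);
    }
    fern[t]=leaf;
}

The index t*structSize+f now runs from 0 (t=0, f=0) to 9*13+12 = 129 (t=9, f=12), so every one of the 130 features is read exactly once.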
This correction increases the tracking performance significantly without any impact on the required processing power, since the extra features we now use were already being calculated; they were simply never read until now.
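As a quick sanity check, independent of the tracker code, a small standalone program can count how many distinct feature indices each stride actually reaches:

#include <iostream>
#include <set>

int main(){
    const int nstructs = 10, structSize = 13;
    std::set<int> old_idx, new_idx;
    for (int t = 0; t < nstructs; t++){
        for (int f = 0; f < structSize; f++){
            old_idx.insert(t*nstructs + f);    // buggy stride: indices collide
            new_idx.insert(t*structSize + f);  // fixed stride: every index unique
        }
    }
    std::cout << "distinct features, old index: " << old_idx.size() << std::endl; // prints 103
    std::cout << "distinct features, new index: " << new_idx.size() << std::endl; // prints 130
    return 0;
}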
I hope I was able to improve the project.
Sincerely, Alex