Open rasmusmarak opened 1 year ago
Note: The tests are currently failing because of new weighting. I will update these tests once we have decided on the final approach for weighting.
Question: In the expression for S, we divide by the total number of points on the tensor, N. However, I am unsure what this division contributes to the score. Since the weights are normalized, the weighted sum is at most 1; so if we assume N = 4000 points or similar, dividing by N makes the score far smaller than the penalties defined on [0, 1] (except the far-distance penalty, which can exceed 1), resulting in an uneven comparison. So the question is: do we still need to divide by the number of points when the weights are already normalized?
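A quick numeric sketch of the scale mismatch described above (the variable names here are illustrative, not from the codebase): with normalized weights the best possible weighted sum is exactly 1, so dividing by N = 4000 caps the score at 0.00025, far below penalties defined on [0, 1].

```python
import numpy as np

# Hypothetical illustration of the scale mismatch with normalized weights.
rng = np.random.default_rng(0)
N = 4000
w = rng.random(N) + 1e-9          # arbitrary positive raw weights
w_normalized = w / w.sum()        # normalized weights sum to exactly 1

full_coverage = w_normalized.sum()        # best possible weighted sum: 1.0
score_with_division = full_coverage / N   # 1 / 4000 = 0.00025
```

This is why the division by N was dropped: without it, the score already lives on the same [0, 1] scale as the penalties.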
I have now removed the division by N, added the weights as a fixed parameter, and updated the tests.
Description
Updating the weighting on coverage according to issue #45, which uses the following scheme: S = sum(B_i * w_i_normalized) / N, where B_i is a boolean indicating whether point i on the tensor has been visited. The raw weights are w_i = 1/r_i, normalized as w_i / sum(w), and N is the total number of defined points on the tensor.
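The scheme above can be sketched as follows. This is a minimal illustration under assumed inputs (a boolean `visited` mask for B_i and a distance array `r` for r_i); the function name and signatures are hypothetical, not the actual implementation in this PR:

```python
import numpy as np

def coverage_score(visited, r):
    """Hypothetical sketch of the weighted coverage score described above.

    visited : boolean array, B_i; True if tensor point i has been visited.
    r       : array of distances r_i defining the raw weights w_i = 1/r_i.
    """
    w = 1.0 / r                  # raw weights w_i = 1/r_i
    w_normalized = w / w.sum()   # normalize so the weights sum to 1
    # With normalized weights the score is already bounded by [0, 1],
    # which is why dividing by N is redundant (see the question above).
    return float(np.sum(visited * w_normalized))

# Closer points (smaller r) contribute more to the score.
score = coverage_score(np.array([True, False, True]),
                       np.array([1.0, 2.0, 4.0]))
```

In this example the normalized weights are [4/7, 2/7, 1/7], so visiting the first and third points yields a score of 5/7.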
Summary of changes
Resolved Issues
How Has This Been Tested?