To measure sensitivity, hold all input weights constant except one; vary that weight and observe how it changes the rank ordering of results.
Set all weights to 3 and vary the test weight from 1 to 15 (one-third to five times the baseline)?
For each test create a table centered at equal weights (3 for all)?
Then measure perturbations in both directions as the % change to the top 50 match set.
Create a table with test weights 1-15 as columns and the 7 test cases (subsector reference points) as rows.
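The procedure above can be sketched as follows. This is a minimal illustration with synthetic candidates and a hypothetical weighted-sum distance; the function and field names (`total_distance`, `components`, `id`) are assumptions, not the actual implementation.

```python
import random

def total_distance(weights, components):
    # Assumed form: weighted sum of distance components.
    return sum(a * b for a, b in zip(weights, components))

def top_k_set(weights, candidates, k=50):
    # Rank candidates by total distance; return the IDs of the top k.
    ranked = sorted(candidates, key=lambda c: total_distance(weights, c["components"]))
    return {c["id"] for c in ranked[:k]}

def pct_change(baseline, perturbed):
    # % of the baseline top-k set displaced under the perturbed weights.
    return 100.0 * len(baseline - perturbed) / len(baseline)

# Synthetic candidates with 4 distance components each.
random.seed(0)
candidates = [{"id": i, "components": [random.random() for _ in range(4)]}
              for i in range(500)]

n_weights = 4
baseline = top_k_set([3] * n_weights, candidates)  # centered at equal weights

# One row of the sensitivity table: vary weight 0 from 1 to 15,
# holding the others at 3, and record the % change to the top-50 set.
row = []
for w in range(1, 16):
    weights = [3] * n_weights
    weights[0] = w
    row.append(pct_change(baseline, top_k_set(weights, candidates)))

print(row)
```

The entry at test weight 3 is 0% by construction (it reproduces the baseline), so the row shows how quickly the match set churns as the weight moves away from the center in either direction.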
This is a little rudimentary because the impact of input weight a1 might depend on the current weight of a2, for example. See "Visualize distance gradients" for ideas about a more thorough approach.
Goal is to see which distance metric has the largest impact on the total distance score when input weights are changed.
Input weights are the weights on a single distance component, where a[i] is the input weight and b[i] is the corresponding component of the mission distance.
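As a working assumption, the total can be written as a weighted sum of components; a minimal sketch of that form:

```python
def total_distance(a, b):
    # a[i]: input weight on component i; b[i]: component of the mission distance.
    # Assumed form: total distance = sum_i a[i] * b[i].
    return sum(a_i * b_i for a_i, b_i in zip(a, b))

print(total_distance([3, 3, 3], [2, 5, 1]))  # → 24
```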
This is different from the weight given to the distance components themselves within the total score.