Closed — JoJudge closed this issue 4 years ago
Yes, it's not great.
We need a processed coordinateUncertainty (i.e. a controlled vocabulary) for it. We would keep the original uncertainty in case we need it, but use the processed one in the filter, dashboard, etc.
What should the groupings be? 1, 10, 100, 1,000, 2,000, 5,000, 10,000, 50,000 and 100,000 m? And where would you like the cuts to be, i.e. "1" is <= 1 m, "10" is > 1 m and <= 10 m, etc.?
I think your groupings make sense, and the cuts should be as you have them: "1" is <= 1 m, "10" is > 1 m and <= 10 m, etc.
We have two options here:

(a) Change the processing to take the larger of the two coordinate uncertainties for a record that is then blurred, instead of their sum. So if a record had 100 m uncertainty and was then blurred to 1 km, it would get 1 km coordinate uncertainty instead of 1,100 m. This would give roughly the categories described above, though they would tend to be lower bounds rather than upper bounds, and conceptually it is not quite correct.

(b) Add a new indexed field, coordinate_uncertainty_category, with the rules as above. This is probably the better approach; we'd need to define where it is shown, downloaded, etc.
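For illustration, option (b) could be sketched roughly as follows. This is a minimal sketch, not the platform's actual processing code: the function name `uncertainty_category` and the label format (e.g. "100m") are assumptions; the bucket boundaries follow the groupings proposed above ("1" is <= 1 m, "10" is > 1 m and <= 10 m, etc.).

```python
# Upper bounds (in metres) for each proposed category, smallest first.
BUCKETS_M = [1, 10, 100, 1_000, 2_000, 5_000, 10_000, 50_000, 100_000]

def uncertainty_category(uncertainty_m):
    """Return a category label (hypothetical format, e.g. "100m") for a raw
    coordinate uncertainty in metres, or None when the value is missing,
    non-positive, or larger than the biggest bucket."""
    if uncertainty_m is None or uncertainty_m <= 0:
        return None
    for limit in BUCKETS_M:
        if uncertainty_m <= limit:
            return f"{limit}m"
    return None  # beyond 100 km: leave uncategorised
```

For example, a record with 100 m uncertainty falls in the "100m" bucket, while a record blurred to a combined 1,100 m would fall in the "2000m" bucket, since each category is an upper bound on the values it contains.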
I prefer b).
I can define where it should be displayed etc.
If we go for (b), the new indexed field (coordinate_uncertainty_category) should replace the current coordinate_uncertainty filter on the records filter, which is pretty much unusable.
The only other page the new field could be added to is the occurrence record page, below the existing coordinate_uncertainty value. I don't think it's needed in any of the downloads.
This is done and can be closed.
There are now hundreds of possible coordinate uncertainties (up to just over 2.5m) when you filter. Could these be grouped into e.g. 10m, 100m, 1km, 2km, 3km?