scott97 opened this issue 7 months ago
I have been doing some research on how iNaturalist determines whether something is "research grade". They require more than 2/3 agreement among identifications to decide what an observation is, and the community votes on whether it should be "research grade" or not.
The interesting thing about iNaturalist is how they resolve conflicting identifications.
From https://www.inaturalist.org/pages/help#identification:
> iNat chooses the taxon with > 2/3 agreement, and if that's impossible, it walks up the taxonomic tree and chooses a taxon everyone agrees with, so if I say it's Canis and you say it's Canis familiaris, 2/2 identifications agree it's in Canis but only 1/2 think it's Canis familiaris so iNat goes with Canis.
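To make that walk-up behaviour concrete, here is a minimal sketch (not iNaturalist's actual implementation) that treats each identification as a path down the taxonomic tree and keeps descending while more than 2/3 of identifications still agree:

```python
from collections import Counter

def community_taxon(identifications: list[list[str]]) -> str | None:
    """Return the deepest taxon that more than 2/3 of identifications agree on.

    Each identification is a root-to-taxon path, e.g. ["Canidae", "Canis"].
    """
    total = len(identifications)
    if total == 0:
        return None
    consensus = None
    depth = 0
    while True:
        # Which taxon does each identification name at this depth?
        at_depth = [path[depth] for path in identifications if len(path) > depth]
        if not at_depth:
            break
        taxon, votes = Counter(at_depth).most_common(1)[0]
        if votes / total > 2 / 3:
            consensus = taxon   # enough agreement; try one level deeper
            depth += 1
        else:
            break               # agreement breaks down; stop walking down
    return consensus

# Example from the quote: one identification says Canis, one says Canis familiaris.
ids = [["Canidae", "Canis"], ["Canidae", "Canis", "Canis familiaris"]]
print(community_taxon(ids))  # "Canis" — only 1/2 agree on Canis familiaris
```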
Users who make observations should have some kind of rank that shows their experience. Users would automatically be granted an *expert* rank once they have many verified annotations, and admin users would also be able to manually grant or revoke this rank.

Along with this, annotations need to be able to be verified by users. We need to figure out how best to measure this. One option is a threshold on the total number of people who verified an annotation combined with a threshold on the percentage in agreement, for example: more than 5 verifications and more than 90% agreement. It would be good to store the lists of users who accept and reject an annotation, so we can change the criteria at a later date. I am not sure yet how experts come into this calculation.
The annotations API should have additional parameters to filter on these criteria. A suggestion: