Currently we have a quality control process for the validation task: we add verification audio clips to the form, clips for which we already know the expected answer (Present vs Not Present).
Recently we added a (private) page for monitoring user activity, including contributions and failed quality control tests. However, it does not show which verification clip was answered incorrectly.
The votes on the verification audio clips are not saved in the database; we only keep, in the form page, whether the test was passed. It would be useful to also save the votes on the verification clips, in order to monitor them more easily. This would let us see directly which verification examples often make people fail the test, so we could change them if they seem wrong.
The positive and negative verification clips are saved as a ManyToManyField to Sound in the TaxonomyNode model.
For the positive ones, each clip must also be an instance of CandidateAnnotation (see the _contribute_validate_annotations_category datasets view function), which lets us store the votes the same way we do for the normal verified clips.
For the negative ones, fake annotations are created (with id 0), which makes it impossible to store the votes.
However, monitoring the negative verification clips is probably not very important, and we could focus on the positive ones, which still need to be improved.
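To make the current schema concrete, here is a minimal pure-Python sketch of the relationships described above (the real code uses Django models; the field names `positive_examples` and `negative_examples` are assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sound:
    id: int

@dataclass
class CandidateAnnotation:
    sound: Sound
    votes: List[str] = field(default_factory=list)  # votes are stored on the annotation

@dataclass
class TaxonomyNode:
    name: str
    # mirrors the ManyToManyField to Sound described above
    # (field names are hypothetical, not the actual ones)
    positive_examples: List[Sound] = field(default_factory=list)
    negative_examples: List[Sound] = field(default_factory=list)

# A positive verification clip is backed by a real CandidateAnnotation,
# so its votes have a place to live:
dog = TaxonomyNode("Dog")
clip = Sound(42)
dog.positive_examples.append(clip)
annotation = CandidateAnnotation(clip)
annotation.votes.append("PP")  # a vote can be recorded

# A negative verification clip only gets a fake annotation with id 0,
# so in the current schema there is nowhere to attach its votes.
```

This is only a schematic of the data model, not runnable against the actual project.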
I propose to:
- Create a TestVote model that stores only the votes corresponding to the verification tests, keeping them separate from the other votes (this avoids having to refactor parts of the code to omit the test votes).
- Create the votes corresponding to the positive verification clips as TestVote instances.
- Add to the monitor categories page a table with the failed test votes.
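The proposal above can be sketched as follows. This is a hypothetical pure-Python illustration (the real TestVote would be a Django model, and the field names and the "PP"/"NP" vote values are assumptions); it shows how storing the test votes makes the per-clip failure statistics for the monitoring table straightforward to compute:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class TestVote:
    username: str
    sound_id: int   # the verification clip that was voted on
    vote: str       # answer given: "PP" (Present) or "NP" (Not Present)
    expected: str   # expected answer for this clip

def failed_votes_per_clip(test_votes):
    """Count, per verification clip, how many times users failed the test."""
    return Counter(v.sound_id for v in test_votes if v.vote != v.expected)

votes = [
    TestVote("alice", 1, "PP", "PP"),
    TestVote("bob", 1, "NP", "PP"),   # failed test
    TestVote("bob", 2, "NP", "NP"),
]
stats = failed_votes_per_clip(votes)  # Counter({1: 1})
```

In the Django version this aggregation would be a simple annotated queryset over TestVote, feeding the failed-test-votes table on the monitor categories page.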