oeg-upm / FAIR-Research-Object

Repository for the work on evaluating FAIRness of Research Objects
Apache License 2.0

Discrepancy in overall score attribute #21

Open esgg opened 7 months ago

esgg commented 7 months ago

This RO: https://w3id.org/ro-id/3125b7be-03f9-447e-806f-20beb66f7949 passes all tests in its elements but returns an overall-score of 67%.

esgg commented 7 months ago

The issue detected is due to:

1) The Research Object assessment is calculated from the general tests passed over the total. There is one general test for each subprinciple (F1.1, F1.2, etc.). In addition, there is a collection of mini tests which assess the FAIRness of each of these subprinciples; these mini tests are included in the explanation attribute. The score of the mini tests determines whether the general test passes or fails, so it is possible to pass a general test without passing all of its mini tests. In that case, your score will be lower than the total score.

2) The overall score, on the other hand, shows the metric (score/total_score) aggregated over all RO components.

It is not a bug per se.
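The two scoring levels described above can be sketched as follows. This is an illustrative model only, assuming a hypothetical pass threshold for the general tests; the class and field names (`Subprinciple`, `minitests_passed`, etc.) are made up and are not the actual FAIR-Research-Object API:

```python
from dataclasses import dataclass

@dataclass
class Subprinciple:
    name: str
    minitests_passed: int        # mini tests passed for this subprinciple
    minitests_total: int         # mini tests defined for this subprinciple
    pass_threshold: float = 0.5  # assumed fraction needed to pass the general test

    @property
    def general_pass(self) -> bool:
        # The general test can pass even when some mini tests fail.
        return self.minitests_passed / self.minitests_total >= self.pass_threshold

subprinciples = [
    Subprinciple("F1.1", 2, 2),
    Subprinciple("F1.2", 2, 3),  # one mini test fails; general test still passes
]

# Assessment based on general tests: both pass -> 100%
general_score = sum(sp.general_pass for sp in subprinciples) / len(subprinciples)

# Score based on mini tests: 4 of 5 pass -> 80%, lower than the general result
overall_score = (sum(sp.minitests_passed for sp in subprinciples)
                 / sum(sp.minitests_total for sp in subprinciples))

print(f"general: {general_score:.0%}, mini tests: {overall_score:.0%}")
```

This shows how an RO can pass every general test and still report an overall score below 100%.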

We will add a new metric based on the general tests of the research object (and its components). The new metric will have the following structure:

```json
"overall_general_score": {
  "description": "Formula used: passed tests / total tests (in all components)",
  "score": XX.XX
}
```
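A minimal sketch of how the proposed metric could be computed, assuming each component reports its general tests as a (passed, total) pair; the component names and numbers here are invented for illustration:

```python
# Hypothetical per-component general-test results: (passed, total)
components = {
    "ro-crate-metadata": (10, 10),
    "dataset-1": (8, 12),
    "workflow-1": (6, 8),
}

passed = sum(p for p, _ in components.values())
total = sum(t for _, t in components.values())

overall_general_score = {
    "description": "Formula used: passed tests / total tests (in all components)",
    "score": round(100 * passed / total, 2),  # 24/30 -> 80.0
}

print(overall_general_score)
```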