Open · rickbeeloo opened this issue 4 years ago
This rubric was, in fact, created by Michel Dumontier, a co-author of the FAIRMetrics paper, whom we will reach out to about this.
From looking at them side by side myself, the metrics are basically in the same order, with two discrepancies:
The metrics that directly correspond to fairmetrics have their metric identifier purl as a URL. Perhaps this helps if you hope to map these programmatically; see the sketch below.
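Since the purl shows up as the identifier on the corresponding metrics, one way to line the two lists up is to pull a rubric's metrics from FAIRshake and keep whichever ones carry a fairmetrics.org purl. The sketch below is only a rough illustration: the endpoint URL, query parameters, rubric id, and field names are assumptions about a generic JSON API rather than the documented FAIRshake interface, so check them against the actual API docs before relying on it.

```python
# Rough sketch (not an official FAIRshake client): list the metrics of a rubric
# and flag the ones whose identifier is a fairmetrics.org purl.
# Endpoint, parameters, and field names below are assumptions, not a documented contract.
import requests

FAIRSHAKE_METRICS_URL = "https://fairshake.cloud/v2/metric"  # assumed endpoint
RUBRIC_ID = 25                                               # hypothetical rubric id

resp = requests.get(FAIRSHAKE_METRICS_URL,
                    params={"rubrics": RUBRIC_ID, "format": "json"})
resp.raise_for_status()

for metric in resp.json().get("results", []):
    # Look through every string field for a FAIR Metrics purl
    # (Gen1 identifiers look like https://purl.org/fair-metrics/FM_...).
    purls = [v for v in metric.values()
             if isinstance(v, str) and "purl.org/fair-metrics" in v]
    if purls:
        print(f"{metric.get('title')}  ->  {purls[0]}")
    else:
        print(f"{metric.get('title')}  ->  (no fairmetrics.org purl)")
```

Anything that prints without a purl would be one of the FAIRshake-specific additions that has no direct counterpart on fairmetrics.org.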
Thank you @u8sand for the quick reply! Indeed, I meant those two discrepancies. It's not that we need it for programmatic access; rather, we want to ask others to fill in the metrics and would like to give them more information about what exactly they need to fill in, so we would like to have the full descriptions from fairmetrics.org.
Interestingly enough, there is another version at https://github.com/FAIRMetrics/Metrics/tree/master/MaturityIndicators -- unfortunately, Gen 2 seems to have 15 metrics now. Personally, I would go by the 16 metrics on FAIRshake, given that there are already over 1,000 assessments using that rubric; your evaluations would then be comparable to those, and furthermore, more metrics can always be consolidated into fewer, not the other way around.
When you make an evaluation, a description is provided beyond the title of the metric, but if you want an exact copy of that document, you may very well construct a new rubric that matches those descriptions.
We want to assess the FAIRness of our dataset according to fairmetrics.org. However, when filling in the form via FAIRshake there are 16 metrics, whereas there are only 14 in the documentation on fairmetrics.org.
What causes this difference, and how can we work out which description from fairmetrics.org corresponds to which question on FAIRshake?