sjDCC opened this issue 6 years ago
Just pointing out that there is http://fairmetrics.org/ (code at https://github.com/FAIRMetrics/Metrics).
There might be more efforts like this one, but we should at least be aware of them here.
As a general remark, I would think that robust metrics of FAIRness would require a more explicit definition of a FAIR Data Object, rather than building upon implicit interpretations of the FAIR principles.
F1000 position: As @raphael-ritz noted, this action would benefit from building on http://fairmetrics.org/, as well as https://doi.org/10.7287/peerj.preprints.26505v1 and COUNTER.
DFG position: Taking into account the DFG’s comments on Recommendation 6, the development of metrics is regarded rather critically. At first glance it seems rather tempting (and potentially “simple”) to establish means of measuring effectiveness, usefulness and quality (though what are the right parameters?). Yet there are numerous basic questions which need to be answered first: Who establishes the rules and methods? How are these enforced and controlled? What are the consequences if the chosen metrics indicate a less effective data repository? Finally, there is even the question of whether implementing metrics will lead to unintended consequences.
Any attempt to implement and use such metrics for evaluation will produce unnecessary irritation within most scientific communities.
Contribution on behalf of the International Association of STM Publishers (STM):
Robust metrics for FAIR data depend on common and standardised data citation rules; see our suggestion under Recommendation 4. STM and STM publishers offer to collaborate on:
Data citation standards -- Promote and implement data citation rules and standards according to the recommendations of FORCE11, to provide credit for good data practice.
As @raphael-ritz and @hollydawnmurray noted, this action would benefit from building on http://fairmetrics.org/ and the NIH Data Commons work on FAIR objects.
In an integrated environment, where DMPs, PIDs, repositories and other tools/services work together in a machine-actionable way, the metrics mentioned in point 1 can be automated, as I showed in my Ph.D. thesis (http://hdl.handle.net/10261/157765).
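As a minimal sketch of what such automation could look like, the snippet below runs three illustrative checks against a DOI: does the PID resolve, is machine-readable metadata retrievable via DOI content negotiation, and does the metadata declare a licence. It assumes a DataCite-registered DOI and the `requests` library; the checks and the example DOI are illustrative, not the metrics defined in the thesis or at fairmetrics.org.

```python
import requests

def check_fairness(doi: str) -> dict:
    """Run a few illustrative, machine-actionable FAIR checks on a DOI."""
    results = {}
    url = f"https://doi.org/{doi}"

    # F: does the PID resolve at all (following redirects to the landing page)?
    resp = requests.get(url, allow_redirects=True, timeout=10)
    results["pid_resolves"] = resp.status_code == 200

    # F/A: is machine-readable metadata available via DOI content negotiation?
    # (Assumes a DataCite-registered DOI, which serves DataCite JSON.)
    meta = requests.get(
        url,
        headers={"Accept": "application/vnd.datacite.datacite+json"},
        timeout=10,
    )
    results["metadata_retrievable"] = meta.ok
    metadata = meta.json() if meta.ok else {}

    # R: does the metadata carry an explicit rights/licence statement?
    results["licence_declared"] = bool(metadata.get("rightsList"))

    return results

# Example DOI for illustration only.
print(check_fairness("10.5281/zenodo.1234567"))
```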
Thumbs up. Take into consideration my remarks on Rec. #1: Definitions of FAIR and Rec. #3: A model for FAIR Data Objects. Implement this by focusing first on FAIR data in certified, trustworthy repositories.
ESO position:
What should these common metrics measure, and for what purpose?
Any metrics developed for FAIR Data should be explicitly and exclusively linked to the FAIR Data principles. What is the purpose of such metrics: checking, and possibly certifying, adherence to FAIR? To what extent will such metrics be used for research and career assessment: will they be obligatory for funding and career advancement? And what is their role within the development of next-generation metrics for Open Science: a simple subset, or a prominent precondition for Open Science? Any metrics for FAIR Data should be clearly related to existing recommendations and proposals for new metrics, as already mentioned, and should be agreed upon in consultation with all major stakeholders.
SSI position:
This recommendation does not describe the purpose of the metrics clearly enough. Are they to measure ‘FAIR-ness’, or are they more wide-ranging?
Such top-down approaches are rarely successful and cause irritation, as the DFG commented above. Each self-identified research community taking the FAIR approach should be empowered and supported to define its own FAIR metrics and what these mean for it. A high-level common core of FAIR metrics might be useful to enable the definition of metric frameworks, but great care needs to be taken not to impose artificial targets, and understanding is needed for interpretation: the metrics might not be comparable across domains. Perhaps minimum standards of FAIR-ness are more useful than metrics used as targets; again, these would have to take the domain into consideration.
A set of metrics for FAIR Data Objects should be developed and implemented, starting from the basic common core of descriptive metadata, PIDs and access. The design of these metrics needs to be mindful of unintended consequences, and they should be regularly reviewed and updated.
A core set of metrics for FAIR Data Objects should be defined to apply globally across research domains. More specific metrics should be defined at the community level to reflect the needs and practices of different domains and what it means to be FAIR for that type of research. Stakeholders: Global coordination fora; Research communities.
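One way to structure such a layered scheme, sketched below as a data structure: a globally shared core of checks that community profiles extend with domain-specific requirements. All names here (CORE_METRICS, the life-sciences profile, the individual check identifiers) are illustrative assumptions, not an agreed standard.

```python
# Illustrative only: a shared core of FAIR checks that domain
# communities extend with their own requirements.
CORE_METRICS = {
    "F1_pid_assigned",          # object has a globally unique, persistent identifier
    "F2_descriptive_metadata",  # rich metadata describe the object
    "A1_access_protocol",       # retrievable via an open, standard protocol
    "R1_licence_declared",      # usage licence is explicit
}

COMMUNITY_PROFILES = {
    # Hypothetical domain profile: the common core plus domain-specific checks.
    "life_sciences": CORE_METRICS | {
        "uses_minimum_information_standard",   # e.g. MIAME-style checklists
        "controlled_vocabulary_annotations",   # terms drawn from domain ontologies
    },
}

def metrics_for(domain: str) -> set:
    """Return the metric set for a domain, falling back to the common core."""
    return COMMUNITY_PROFILES.get(domain, CORE_METRICS)

print(sorted(metrics_for("life_sciences")))
```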
The European Commission should support a project to coordinate the activities of various groups defining FAIR metrics and ensure these are created in a standardised way to enable future monitoring. Stakeholders: Funders.
The process of developing, approving and implementing FAIR metrics should follow a consultative methodology, including scenario planning, to minimise as far as possible any unintended consequences and counter-productive gaming. Metrics need to be regularly reviewed and updated to ensure they remain fit for purpose. Stakeholders: Global coordination fora; Publishers; Data services.