sjDCC opened 6 years ago
Doesn't this recommendation subsume #9?
4TU.Centre for Research Data position: Funders should have a bigger role here, since they are requiring FAIR data. A final and strong statement of the funders' interpretation of FAIR data (per discipline) would help data services define metrics better.
DFG position: As commented on Recommendations 6, 9 and 11, we view metric methods for assessing science and the FAIRness of datasets rather critically. The wish to measure is understandable, given how readily it appears to qualify any kind of output, and it is of course fair to search for adequate means to do so. However, metric assessment in science has so far not produced better science or new findings, and metrics cannot be expected to offer plausible support for implementing the FAIR principles.
Any outcome of an assessment based on metrical methods has the potential to stall valuable initiatives simply on the basis of (potentially) questionable numbers. This holds particularly true for attempts to introduce automated metrical methodologies.
Contribution on behalf of the International Association of STM Publishers (STM): As mentioned under several related recommendations, we see four cornerstone components in a machine-actionable ecosystem for FAIR Data. Of these, the following is relevant for FAIR Data Metrics: Data Citation standards -- promote and implement data citation rules and standards according to the recommendations of FORCE11, to provide credit for good data practice.
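As an illustration only, a machine-actionable data citation can be retrieved through DOI content negotiation (the mechanism documented at citation.crosscite.org). The sketch below is not part of the FORCE11 recommendations, and the DOI used is hypothetical:

```python
import requests

def fetch_citation(doi: str, style: str = "apa") -> str:
    """Fetch a formatted citation for a dataset DOI via DOI content negotiation.

    Content negotiation on doi.org is an existing mechanism; the DOI passed in
    the example below is hypothetical and used only for illustration.
    """
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.text.strip()

if __name__ == "__main__":
    # Hypothetical dataset DOI.
    print(fetch_citation("10.1234/example-dataset"))
```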
As noted on #9, this action would benefit from building on http://fairmetrics.org/ and the NIH Data Commons work on FAIR objects.
Metrics are a viable way to automatically measure the level of FAIRness of, for example, a repository. However, the FAIR principles are just that: guidelines that are intentionally vague and not specified in any level of detail. Herein lies the challenge of defining metrics that can be used to measure FAIRness. It is necessary to set a reference point. As data become FAIRer, the reference point will be raised and existing metrics will be devalued. There will probably be a need to introduce FAIR versions, so that data can be said to comply with FAIR version X.
Currently, most repositories (or datasets) will not meet the majority of machine-actionable tests, and will thus fail miserably.
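To make "machine-actionable tests" concrete, here is a minimal sketch of the kind of automated check such a metric might perform: does an identifier look persistent, does it resolve, and does the landing page expose structured metadata? The checks, regex and heuristics below are illustrative assumptions, not the official metrics from fairmetrics.org:

```python
import re
import requests

def basic_fair_checks(identifier: str) -> dict:
    """Run a few illustrative, machine-actionable checks on a dataset identifier."""
    results = {}

    # F1 (illustrative): identifier matches a DOI-like persistent identifier pattern.
    results["persistent_identifier"] = bool(re.match(r"^10\.\d{4,9}/\S+$", identifier))

    # A1 (illustrative): identifier resolves over a standard protocol (HTTP via doi.org).
    response = None
    try:
        response = requests.get(f"https://doi.org/{identifier}", timeout=10)
        results["resolvable"] = response.ok
    except requests.RequestException:
        results["resolvable"] = False

    # I1/R1 (illustrative heuristic): landing page embeds JSON-LD metadata.
    results["structured_metadata"] = bool(
        response is not None and "application/ld+json" in response.text
    )

    return results

if __name__ == "__main__":
    # Hypothetical DOI used only for illustration.
    print(basic_fair_checks("10.1234/example-dataset"))
```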
Combine with Rec. #9: Develop robust FAIR data metrics #9 and perhaps Rec. #14: Recognise and reward FAIR data and data stewardship #14.
Some overlap with Recommendations 5, 6, 9, 10, 11, and 14 on FAIR Data assessment. Perhaps merge?
Agreed sets of metrics should be implemented and monitored to track changes in the FAIRness of datasets or data-related resources over time.
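One way to read this recommendation is that repositories or data services keep a time series of scores per dataset so that changes in FAIRness become visible. A minimal sketch, assuming scores are already produced by some agreed metric (the dataset identifiers, dates and scores below are made up):

```python
from collections import defaultdict
from datetime import date

# Hypothetical assessment records: (dataset_id, assessment_date, score out of 1.0).
assessments = [
    ("dataset-a", date(2023, 1, 15), 0.40),
    ("dataset-a", date(2024, 1, 15), 0.65),
    ("dataset-b", date(2023, 6, 1), 0.55),
    ("dataset-b", date(2024, 6, 1), 0.50),
]

def fairness_trends(records):
    """Group scores per dataset and report the change between the first and last assessment."""
    by_dataset = defaultdict(list)
    for dataset_id, when, score in records:
        by_dataset[dataset_id].append((when, score))

    trends = {}
    for dataset_id, points in by_dataset.items():
        points.sort()  # chronological order
        trends[dataset_id] = points[-1][1] - points[0][1]
    return trends

print(fairness_trends(assessments))
```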
Repositories should publish assessments of the FAIRness of datasets, where practical, based on community review and the judgement of data stewards. Methodologies for assessing FAIR data need to be piloted and developed into automated tools before they can be applied across the board by repositories. Stakeholders: Data services; Institutions; Publishers.
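A sketch of what a published, machine-readable assessment record could look like if a repository combined automated checks with a data steward's judgement. The field names and the 50/50 weighting are arbitrary illustrative choices, not a community standard:

```python
import json
from datetime import date

def assessment_record(dataset_id, automated_results, steward_rating, reviewer):
    """Combine automated check results with a steward's rating into a publishable record.

    The equal weighting of automated checks and human review is an assumption
    made only for this sketch.
    """
    automated_score = sum(automated_results.values()) / len(automated_results)
    combined = 0.5 * automated_score + 0.5 * steward_rating
    return {
        "dataset": dataset_id,
        "date": date.today().isoformat(),
        "automated_checks": automated_results,
        "steward_rating": steward_rating,
        "reviewer": reviewer,
        "combined_score": round(combined, 2),
    }

record = assessment_record(
    "dataset-a",
    {"persistent_identifier": True, "resolvable": True, "structured_metadata": False},
    steward_rating=0.8,
    reviewer="data-steward-01",
)
print(json.dumps(record, indent=2))
```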
Metrics for the assessment of research contributions, organisations and projects should take the past FAIRness of datasets and other related outputs into account. This can include citation metrics, but appropriate alternatives should also be found that suit the research, researchers or research outputs being assessed. Stakeholders: Funders; Institutions.