FAIR-Data-EG / Action-Plan

Interim recommendations and actions from the FAIR Data Expert Group

Rec. 29: Implement FAIR metrics #29

Open · sjDCC opened this issue 6 years ago

sjDCC commented 6 years ago

Agreed sets of metrics should be implemented and monitored to track changes in the FAIRness of datasets or data-related resources over time.
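A minimal sketch of what "monitoring FAIRness over time" could look like in practice, assuming Python and per-principle scores on a 0–1 scale produced by some assessment tool; the field names, scoring scale and example values are illustrative assumptions, not part of the recommendation.

```python
# Illustrative sketch: track how a dataset's FAIRness scores change over time.
# The 0-1 per-principle scores and the example identifier are hypothetical.
from dataclasses import dataclass
from datetime import date


@dataclass
class FairAssessment:
    dataset_id: str
    assessed_on: date
    findable: float       # each score assumed to lie in [0, 1]
    accessible: float
    interoperable: float
    reusable: float

    @property
    def overall(self) -> float:
        # Simple unweighted mean across the four principle scores.
        return (self.findable + self.accessible +
                self.interoperable + self.reusable) / 4


def fairness_trend(assessments: list[FairAssessment]) -> float:
    """Change in overall FAIRness between the earliest and latest assessment."""
    ordered = sorted(assessments, key=lambda a: a.assessed_on)
    return ordered[-1].overall - ordered[0].overall


history = [
    FairAssessment("doi:10.1234/example", date(2018, 1, 1), 0.50, 0.25, 0.25, 0.00),
    FairAssessment("doi:10.1234/example", date(2019, 1, 1), 0.75, 0.50, 0.50, 0.25),
]
print(f"FAIRness change: {fairness_trend(history):+.2f}")
```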

AlasdairGray commented 6 years ago

Doesn't this recommendation subsume #9?

ghost commented 6 years ago

4TU.Centre for Research Data position: The funders should have a bigger role here, since they are the ones requiring FAIR data. A final and strong statement of the funders' interpretation of FAIR data (per discipline) would help data services to define metrics better.

katerbow commented 6 years ago

DFG position: As commented on Recommendations 6, 9 and 11, metric methods to assess science and the FAIRness of data sets are viewed rather critically. The wish to measure is understandable, given how readily metrics seem to qualify any kind of output, and it is of course fair to search for adequate means to do so. However, metric assessment has so far not produced better science or new findings, and it can be expected that metrics will not plausibly support the implementation of the FAIR principles.

Any outcome of a metrics-based assessment has the potential to stall valuable initiatives simply on the basis of (potentially) questionable numbers. This holds particularly true for attempts to introduce automated metrical methodologies.

Eefkesmit commented 6 years ago

Contribution on behalf of the International Association of STM Publishers (STM): As mentioned under several related recommendations, we see four cornerstone components in a machine-actionable ecosystem for FAIR data. Of these, the following is relevant for FAIR data metrics: Data citation standards -- promote and implement data citation rules and standards according to the recommendations of FORCE11, to provide credit for good data practice.

Drosophilic commented 6 years ago

As noted on #9, this action would benefit from building on http://fairmetrics.org/ and the NIH Data Commons work on FAIR objects.

ajaunsen commented 6 years ago

Metrics are a viable way to automatically measure the level of FAIRness of, e.g., a repository. However, the FAIR principles are just that: guidelines that are intentionally vague and not specified in any level of detail. Herein lies the challenge of defining metrics that can be used to measure FAIRness. It is necessary to set a reference point, and as data becomes FAIRer, the reference point will be raised and all metrics thereby devalued. There will probably be a need to introduce FAIR versions, so that data can be said to be compliant with FAIR version X.

Currently, most repositories (or datasets) will not meet the majority of machine-actionable tests, and will thus fail miserably.
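A minimal sketch, assuming Python with the `requests` library, of the kind of machine-actionable tests mentioned above; the three checks and the placeholder identifiers are illustrative assumptions, not the FAIRmetrics or NIH Data Commons test definitions.

```python
# Illustrative machine-actionable checks against a dataset record.
# The URLs below are placeholders, not real dataset identifiers.
import json
import requests


def check_pid_resolves(pid_url: str) -> bool:
    """F1-style check: the persistent identifier resolves over HTTP."""
    try:
        resp = requests.head(pid_url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False


def check_machine_readable_metadata(landing_url: str) -> bool:
    """I1-style check: the landing page embeds JSON-LD (e.g. schema.org) metadata."""
    try:
        html = requests.get(landing_url, timeout=10).text
    except requests.RequestException:
        return False
    return "application/ld+json" in html


def check_licence_declared(metadata: dict) -> bool:
    """R1.1-style check: the metadata record declares a licence."""
    return bool(metadata.get("license"))


if __name__ == "__main__":
    record = {"license": "https://creativecommons.org/licenses/by/4.0/"}
    results = {
        "pid_resolves": check_pid_resolves("https://doi.org/10.1234/example"),
        "machine_readable_metadata": check_machine_readable_metadata("https://example.org/dataset/123"),
        "licence_declared": check_licence_declared(record),
    }
    print(json.dumps(results, indent=2))
```

Run against typical current repositories, checks like these would indeed mostly return false, which is the point made above about setting a realistic reference point.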

ferag commented 6 years ago

http://hdl.handle.net/10261/157765

pkdoorn commented 6 years ago

Combine with Rec. #9: Develop robust FAIR data metrics #9 and perhaps Rec. #14: Recognise and reward FAIR data and data stewardship #14.

mromanie commented 6 years ago

ESO position: See Rec. 6, Rec. 9 and Rec. 11.

gtoneill commented 6 years ago

Some overlap with Recommendations 5, 6, 9, 10, 11, and 14 on FAIR Data assessment. Perhaps merge?