I've copied the text and added checkmarks for progress tracking and/or comments -- will update as progress is made
Suggested additions to current documentation
[x] We'd like this recipe to help users 1) do an assessment and 2) interpret the output of the assessment so they can determine how to move forward with improving their FAIR score. We read through this recipe, but were unable to find obvious answers to the following questions:
[x] How does FAIRshake evaluate interoperability?
[x] How can I increase a specific FAIR score?
[x] How do I read/understand the FAIR insignia?
[x] How do I dispute/correct/change/question my FAIR rating?
Addressed these in a new "motivation" section after the background
[x] We suggest adding a new use case scenario section/framework to the recipe to answer these questions with an example researcher. For example, Janice, a researcher at a Common Fund program, has just completed an assessment using the automated CFDE rubric metrics on FAIRshake for a metadata object. We'd like this section to explain:
[x] how she should interpret the resulting FAIRshake insignia (e.g., provide an example insignia and walk us through interpretation of the different boxes, colors, percents (displayed when mouse hovers over box))
[x] where she should look on the FAIRshake website/insignia to determine the interoperability of her assessed metadata
[x] what outputs she should look at to understand how FAIRshake arrived at the numerical scores of the assessment (e.g., is there a graphical/tabular output to look at?)
This information is available at the project level on the project landing pages; I'll try to capture it in the recipe.
[x] how she can improve the overall and component FAIR scores (e.g., understand why persistent identifier was scored as 11% and how to make it higher)
[x] how she can dispute/correct/change/question the FAIR score (e.g., how to ask for help?)
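To make the insignia percentages concrete for a section like this, here's a simplified sketch of how per-metric and overall percentages could be derived from rubric answers. This is illustrative only, not FAIRshake's actual scoring code; the metric names and answer values are hypothetical:

```python
# Illustrative only: a toy scorer mapping rubric answers (each in 0.0-1.0)
# to the per-metric percentage shown when hovering over an insignia square.
# Metric names and answers below are hypothetical, not FAIRshake's rubric.

def metric_percentage(answers):
    """Average the answers for one metric and express it as a percentage."""
    return round(100 * sum(answers) / len(answers))

def overall_score(metric_answers):
    """Overall score: mean of the per-metric percentages."""
    percentages = {m: metric_percentage(a) for m, a in metric_answers.items()}
    overall = round(sum(percentages.values()) / len(percentages))
    return percentages, overall

# Example: a digital object assessed twice against three metrics.
assessments = {
    "persistent identifier": [0.0, 0.22],    # mostly missing -> low score
    "machine-readable metadata": [1.0, 1.0],
    "license": [0.5, 1.0],
}
per_metric, total = overall_score(assessments)
# per_metric["persistent identifier"] comes out to 11, i.e. the "11%"
# a user would want to understand and improve.
```

A walkthrough like this could show Janice that a low square reflects low (or unanswered) rubric answers for that metric, so improving the underlying metadata and re-assessing is how the percentage goes up.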
[x] Some of the information in the FAIRshake Documentation page/youtube videos would be very useful to add to this recipe (e.g., anatomy of insignia figure).
The anatomy of the insignia has been added to the recipe, and the information mentioned is now linked from the recipe.
[x] We also recommend making the Documentation page a link on the top navigation bar of the FAIRshake website so it is more prominent; it's easy to miss at the bottom of the page!
Suggestions and questions for the current documentation
Check hyperlinks
[x] When the website is opened in an incognito browser window, some of the website links on the FAIR Cookbook recipe are broken:
[x] "transformed to DATS" link to a Github repo
[x] "You can refer to the scripts here for examples on how you can accomplish this" link
This is a private repository as it contains information that has yet to be reviewed by the DCCs; someone please advise. The links have been replaced with a section describing why the CFDE FAIR repo is private, what it contains, and how to access it.
[x] There are some links to the glossary page. It would be helpful to link to the entries instead of the top of the page:
e.g., for "FAIR", "FAIRshake", "RDF", "JSON Schema", "JSON", "DATS"
[x] There seem to be two processes documented in this recipe - conducting an assessment and publishing assessment metrics. A use case example for each of the ingredients and processes would help clarify the objectives of the recipe.
[x] For example, for "Machine-readable (ideally standardized) metadata description for enabling automated assessments" - Can you provide an example and where the metadata might come from?
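To illustrate what such an ingredient might look like, here's a hand-written, DATS-style JSON description plus the kind of trivial presence check an automated assessment can run against it. The field names mimic the general DATS dataset shape, but this is an invented illustration (the identifier is hypothetical); the DATS JSON Schemas are the authoritative source for required fields:

```python
import json

# A hand-written, DATS-style dataset description (illustrative only;
# consult the DATS JSON Schemas for the authoritative structure).
metadata = json.loads("""
{
  "title": "Example RNA-seq study",
  "identifier": {"identifier": "https://doi.org/10.1000/example"},
  "creators": [{"name": "Janice Researcher"}],
  "licenses": [{"name": "CC-BY-4.0"}],
  "types": [{"information": {"value": "transcription profiling"}}]
}
""")

def has_field(doc, field):
    """A trivial automated check: is the field present and non-empty?"""
    return bool(doc.get(field))

# Checks like this are the kind of thing an automated rubric can run
# against machine-readable metadata.
report = {f: has_field(metadata, f)
          for f in ("title", "identifier", "licenses", "description")}
# "description" is absent above, so that check fails.
```

In the recipe, this could anchor the ingredient to a concrete source: the metadata comes from the DCC's extracted records, transformed to DATS, and the automated rubric is just a battery of checks like `has_field` run over each document.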
FAIR metrics by fairmetrics.org
"To perform an assessment with this rubric, we'll need something to assess. For this, you can find a digital object already in FAIRshake or register your own (with 'create object' at the bottom of the rubric)."
[x] The figure above this sentence shows the rubric page. It does not have a search bar. The figure below this sentence has a search bar. Could you clarify how the user can navigate from the first screenshot page to the next?
The first screenshot shows the home page and is meant to be independent of the subsequent screenshots; I'll clarify the workflow.
[x] Could you add a screenshot of the "Create New Digital Object" button? How does one navigate to that page?
We also improved FAIRshake to make this more obvious, especially when you're not logged in
[x] It would be very helpful to users if you could add a few tutorial steps with answers to the questions in the manual rubric, including steps users could try with a specific digital object.
[x] It would also help if you could include screenshots from the digital object's interface/website where these answers can be found. For example, if you are evaluating the GTEx portal, where would you look for a "persistent identifier"?
Automated rubric for DATS
Here are some questions that we had while trying to understand the automated rubric:
[x] What's the first step? What's the input file?
[x] Where should I run this code? Or what should I do with the script?
[x] Where do I see the FAIR score?
Some effort was put into describing this specifically for the C2M2 automated assessment; the script can be run directly on a datapackage and is now described in the recipe. We're happy to further clarify/simplify this if additional questions arise.
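For readers wondering what running an assessment "directly on a datapackage" might look like, here's a minimal sketch. The inlined datapackage content and the two checks are invented for illustration; the real CFDE scripts in the FAIR repo implement the full rubric:

```python
import json

# A minimal datapackage.json-style document, inlined for illustration
# (a real C2M2 submission has many more resources and fields).
datapackage = json.loads("""
{
  "name": "example-c2m2-submission",
  "resources": [
    {"name": "file", "path": "file.tsv",
     "schema": {"fields": [{"name": "id"}]}},
    {"name": "subject", "path": "subject.tsv"}
  ]
}
""")

def assess(pkg):
    """Score each resource on two toy checks: has a path, has a schema."""
    scores = []
    for res in pkg.get("resources", []):
        checks = [bool(res.get("path")), bool(res.get("schema"))]
        scores.append(sum(checks) / len(checks))
    return round(100 * sum(scores) / len(scores))  # overall percentage

score = assess(datapackage)
# One resource passes both checks, the other only one, so the overall
# score lands between them.
```

Walking through an input file and a resulting score like this in the recipe would answer "what's the input?", "where do I run it?", and "where do I see the FAIR score?" in one pass.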
[x] It would be very helpful to add step-by-step tutorials with screenshots of input files, outputs, fair scores and their interpretation in this documentation.
[x] The content from the "Preparing to perform automated assessments" section onwards is quite detailed and technical; however, a new user is unlikely to have enough context to understand how to apply this information. Adding more specific examples of when and how this part of the recipe should be used would be very helpful.
This section has been split into "performing an automated assessment", with the practical use case of the C2M2 assessment using the existing scripts in the FAIR repo, and the original, broader "constructing FAIR assessments" with the C2M2 case study.
This recipe has become somewhat of a behemoth because it is tasked with tackling many different concepts. It may make sense to break it into smaller, more focused discussions. Topics include FAIR in general, FAIR in the context of the CFDE and C2M2, manual FAIR assessments, automated FAIR assessments, and the landscape of FAIR metrics. Thoughts on how to make the recipe easier for readers to follow are welcome.
Work in progress to address the comments in https://github.com/nih-cfde/the-fair-cookbook/issues/44