Closed: makxdekkers closed this issue 4 years ago
Points raised in online meeting 3 on 18 June 2019
Here is a new version of the indicators!
Is it the protocol alone, or the protocol (processing component) working together with the rich metadata, where the owner or manager has created authorisation (access control rules) in the metadata?
@keithjeffery It is clear that there is a relationship between metadata on one hand and authorisation and access control on the other hand. The GO-FAIR explanation talks about specifying accessibility "in such a way that a machine can automatically understand the requirements, and then either automatically execute the requirements or alert the user to the requirements". Exactly how this can be done in existing, general or domain-specific metadata standards is not clear to me.
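To make the GO-FAIR wording concrete, here is a minimal, purely illustrative sketch of the "automatically execute or alert the user" decision. The metadata fields (`accessRights`, `level`, `machineActionable`) are assumptions for illustration, not part of any existing metadata standard:

```python
def decide_access(metadata: dict) -> str:
    """Return 'execute' if the access requirements can be handled by a
    machine automatically, or 'alert' if a human must act.
    The metadata schema here is hypothetical, for illustration only."""
    rights = metadata.get("accessRights", {})
    if rights.get("level") == "open":
        return "execute"              # no restrictions: retrieve directly
    if rights.get("machineActionable"):
        return "execute"              # e.g. a declared token endpoint
    return "alert"                    # e.g. 'contact the data owner'

# Open data can be retrieved without human intervention;
# restricted data without machine-actionable conditions raises an alert.
print(decide_access({"accessRights": {"level": "open"}}))        # execute
print(decide_access({"accessRights": {"level": "restricted"}}))  # alert
```

The open question in the thread is exactly which standard metadata elements could carry the `machineActionable` information this sketch assumes.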
Makx - I have been looking at this for the EPOS-IP and EPOS-ERIC projects. Clearly there already exist software packages to perform access control - all big businesses use them. They range from simple (can this user read, update, delete an asset, including execute if the asset is a service) to role-based (user X from organisation Y in role Z - where organisational policies are encoded as well as permissions) and on to temporally limited role-based access control (giving permission for a defined time period). Such systems have registries or catalogs with the relevant information. In EPOS we are looking to 'drive' any chosen access control system from the metadata catalog, which means that the rich metadata has to include the required attributes/parameters to match those of the access control system, such as user, organisation, role and temporal bounds. best Keith
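The temporally limited role-based check Keith describes can be sketched roughly as follows. This is an assumption-laden illustration, not EPOS code: the `Permission` record stands in for the attributes (user, organisation, role, temporal bounds) that the rich metadata would need to carry to drive the access control system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Permission:
    """Hypothetical record mirroring the attributes a metadata catalog
    would need to pass to an access control system."""
    user: str
    organisation: str
    role: str
    valid_from: date
    valid_until: date

def is_allowed(perm: Permission, user: str, organisation: str,
               role: str, on: date) -> bool:
    """Grant access only if user, organisation and role match AND the
    request date falls within the permission's temporal bounds."""
    return (perm.user == user
            and perm.organisation == organisation
            and perm.role == role
            and perm.valid_from <= on <= perm.valid_until)

# User X from organisation Y in role Z, permitted during 2019 only.
perm = Permission("X", "Y", "Z", date(2019, 1, 1), date(2019, 12, 31))
print(is_allowed(perm, "X", "Y", "Z", date(2019, 6, 18)))  # True
print(is_allowed(perm, "X", "Y", "Z", date(2020, 1, 1)))   # False
```

Real systems express this with policy languages and registries rather than hard-coded records; the point is only which attributes the metadata must expose.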
Please find the current version of the indicator(s) and their respective maturity levels for this FAIR principle. Indicators and maturity levels will be presented, as they stand, to the next working group meeting for approval. In the meantime, any comments are still welcomed.
The editorial team will now concentrate on weighing and prioritizing these indicators. More information soon.
The "Where necessary" is now not part of the indicators. I would formulate the negatives as "The protocol does NOT support X, and not all data use is unrestricted".
On the other hand this is very unlikely to be the case....
What is much more likely is that
@rwwh In this principle, do you again (like in your comment at https://github.com/RDA-FAIR/FAIR-data-maturity-model-WG/issues/20#issuecomment-519955211) consider 'protocol' also to include 'actions to be taken by a human reuser'?
Peter Wittenburg at https://docs.google.com/spreadsheets/d/1mkjElFrTBPBH0QViODexNur0xNGhJqau0zkL4w8RRAw/edit?disco=AAAADadg-SI
On A1.2-01D and A1.2-02D: These formulations can be misunderstood. It must be clear that the protocol itself does not do A&A (authentication and authorisation).
Peter, do you have a suggestion to make the formulations clearer?
@rwwh
The "Where necessary" is now not part of the indicators. I would formulate the negatives as "The protocol does NOT support X, and not all data use is unrestricted".
The 'where necessary' could be covered if the indicators were 'recommended', not 'mandatory'.
Dear contributors,
Below you can find the indicators and their maturity levels in their current state as a result of the above discussions and workshops.
Please note that this thread is going to be closed within a short period of time. The current state of the indicators, as of early October 2019, is now frozen, with the exception of the indicators for the principles that are concerned with ‘richness’ of metadata (F2 and R1). The current indicators will be used for the further steps of this WG, which are prioritisation and scoring.

Later on, they will be used in a testing phase where owners of evaluation approaches are going to be invited to compare their approaches (questionnaires, tools) against the indicators. The editorial team, in consultation with the Working Group, will define the best approach to test the indicators and evaluate their soundness. As such, the current set of indicators can be seen as an ‘alpha version’. In the first half of 2020, the indicators may be revised and improved, based on the results of the testing.

If you have any further comments or suggestions regarding that specific discussion, please share them with us. Besides, we invite you to have a look at the following two sets of issues.
Prioritisation
• Indicators prioritisation for Findability
• Indicators prioritisation for Accessibility
• Indicators prioritisation for Interoperability
• Indicators prioritisation for Reusability
Scoring
• Indicators for FAIRness | Scoring

We thank you for your valuable input!