Closed nicebread closed 7 months ago
I updated the script. A typical scoring result looks like this:

{
  indicators: {
    P_Data_Open_AccessLevel: 1,
    P_Data_Open_FAIR: 0,
    P_IndependentVerification: 0,
    P_ReproducibleScripts: 1,
    P_ReproducibleScripts_FAIR: 0,
    P_OpenMaterials: 0,
    P_Preregistration: 1,
    P_Preregistration_Content: 0.75,
    P_FormalModeling: 2,
    P_PreregisteredReplication: 0
  },
  max_score: 12,
  score: 5.75,
  relative_score: 0.4792
},
Excellent!
Can you also add the maximum possible score for each indicator in the output?
I considered that too, but haven't done it because you didn't mention it in the issue. I would not add the max scores to the output (they are not part of the score; this is just meta information that stays the same for all research outputs). Instead, I would write a new function, get_indicator_max_scores(type), that returns a list of max scores. What do you think?
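For illustration, such a function could look roughly like this. This is a minimal sketch: the function name get_indicator_max_scores is taken from the comment above, but the lookup table, the type argument's values, and the per-indicator maxima are assumptions for illustration only.

```r
# Sketch: return the maximum possible score per indicator for a given
# research output type. The lookup table below is illustrative only;
# actual maxima would come from the scoring scheme's definition.
get_indicator_max_scores <- function(type = "article") {
  max_scores <- list(
    article = c(
      P_Data_Open_AccessLevel = 1,
      P_Data_Open_FAIR        = 1,
      P_ReproducibleScripts   = 1,
      P_Preregistration       = 1,
      P_FormalModeling        = 2
      # ... remaining indicators
    )
  )
  max_scores[[type]]
}
```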
Should be fine, too. In the end, I need to compute relative scores from subsets of indicators.
E.g., for OpenData:
r$> scores_data
P_Data_Open_AccessLevel        P_Data_Open_FAIR
                   1.00                    0.75
P_Data_Open_AccessLevel        P_Data_Open_FAIR
                   1.00                    0.00
P_Data_Open_AccessLevel        P_Data_Open_FAIR
                   1.00                    0.00
P_Data_Open_AccessLevel        P_Data_Open_FAIR
                   0.00                    0.00
P_Data_Open_AccessLevel        P_Data_Open_FAIR
                   1.00                    0.25
Then I need to know how much candidates could have achieved with these indicators.
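A relative sub-score would then be the achieved points divided by the achievable points for the selected indicators. A hedged sketch (the helper name relative_subscore and its signature are my own; the max-scores vector is assumed to use the same indicator names as the scores):

```r
# Sketch: relative score for a subset of indicators, given a named
# vector of achieved scores and a named vector of maximum scores.
relative_subscore <- function(scores, indicator_names, max_scores) {
  achieved   <- sum(scores[indicator_names], na.rm = TRUE)
  achievable <- sum(max_scores[indicator_names])
  achieved / achievable
}
```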
OK, writing a separate function would not work because some indicators might not be applicable. So we would actually not be duplicating information by including the max scores in the scoring result.
We actually skip not-applicable indicators; do we need them in the output?
I added the indicator max scores to the scoring output:
Works great now!
> We actually skip not-applicable indicators; do we need them in the output?
Currently, I think: No.
Can you update score.R so that the scores for each indicator are returned, with the indicator name as the variable name (e.g., P_ReproducibleScripts_FAIR)? I need to aggregate specific indicators to create sub-scores (e.g., all indicators related to open data).
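Assuming the scores come back as a named vector, aggregating all open-data indicators could then be as simple as selecting names by prefix. This is a sketch: the P_Data_ naming convention is taken from the output above, and the example values are illustrative.

```r
# Sketch: select all open-data indicators by their name prefix
# and sum them into a sub-score.
scores <- c(
  P_Data_Open_AccessLevel   = 1,
  P_Data_Open_FAIR          = 0.75,
  P_ReproducibleScripts     = 1,
  P_Preregistration_Content = 0.75
)
open_data <- scores[grepl("^P_Data_", names(scores))]
sum(open_data)  # open-data sub-score
```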