Closed by fidanLimani 1 month ago
Hi @fidanLimani , thank you for your report.
I just tested making an assessment but didn't experience any issue. I'm wondering if you hit a momentary service interruption. If you try again and experience the same issue, it'd be helpful if you share the object and assessment you're trying to make.
Will fix the typo, thanks for the report.
Thanks for the prompt response, Daniel.
I still see the same issue when trying to assess an object.
Would you please try the following:
Thanks, Fidan
Hi @fidanLimani
I looked into this, and it turns out the metric type was not specified for the metric: https://fairshake.cloud/metric/614/
I've updated the metric to have type yesnobut, and it's now working.
We should not allow creation of a metric without the type specified; I can fix that.
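The fix described above amounts to rejecting metric creation when no type is given. A minimal sketch of such a check, in plain Python (FAIRshake itself is a Django app, so the real fix would likely live in a model or serializer; the set of valid types below is an assumption, with only yesnobut confirmed by this thread):

```python
# Hypothetical validation for metric creation: reject payloads with no type.
# Only "yesnobut" is confirmed by this thread; the other entries are assumed.
VALID_METRIC_TYPES = {"yesnobut", "yesno", "text", "url"}

def validate_metric(payload: dict) -> dict:
    """Raise ValueError unless the payload carries a recognized metric type."""
    metric_type = payload.get("type")
    if not metric_type:
        raise ValueError("Metric 'type' is required")
    if metric_type not in VALID_METRIC_TYPES:
        raise ValueError(f"Unknown metric type: {metric_type!r}")
    return payload
```

With a check like this in place, a metric saved without a type (the cause of the server error above) would be rejected at creation time instead of failing later during assessment.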
Thanks for addressing it, Daniel; the assessment is now working! I seem to have missed the type for that metric. About the existing assessments: I was able to remove the three test ones, but two still remain (https://fairshake.cloud/digital_object/813158/assessments/); is there a way I can remove these?
I would like to know more about the metric types: how they support a metric's evaluation, how a metric type maps to a score, whether there is potential to use other automatic tools to assess a subset of the metrics of a rubric in FAIRshake (any such examples?), etc. Ideally, a brief meeting would be great, since these questions are not code-related per se.
Kind regards,
Hi @fidanLimani,
I've removed those assessments, I think they were my test ones (:
Since FAIRness exists independent of field, it is essentially impossible to have effective assessments that span all fields; our thinking is that each community must define the criteria for FAIRness in their own domain. FAIRshake operates as a repository of completed assessments where people can hopefully find both FAIR resources and FAIR community standards in the form of metrics and rubrics.
Our documentation includes details on using the API to programmatically register assessments on the platform, which could be done after programmatically assessing objects according to your rubric. We've done this in a number of projects. I'll get back to you on whether or not I have the bandwidth for a meeting related to this, but I'm happy to continue an asynchronous discussion over email.
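A rough sketch of the programmatic registration flow mentioned above, using only the Python standard library. The endpoint path, field names, and auth header are assumptions for illustration; consult https://fairshake.cloud/documentation/ for the actual API:

```python
# Hypothetical sketch: assemble and POST an assessment to the FAIRshake API.
# Endpoint path, payload field names, and token scheme are assumed, not
# taken from the real API reference.
import json
import urllib.request

API_ROOT = "https://fairshake.cloud"

def build_assessment(target_id: int, rubric_id: int, answers: dict) -> dict:
    """Assemble an assessment payload: one answer per metric id."""
    return {
        "target": target_id,
        "rubric": rubric_id,
        "answers": [
            {"metric": metric_id, "answer": value}
            for metric_id, value in sorted(answers.items())
        ],
    }

def post_assessment(payload: dict, api_key: str) -> None:
    """Register the assessment (assumed endpoint and auth scheme)."""
    req = urllib.request.Request(
        f"{API_ROOT}/assessment/",  # assumed path
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {api_key}",  # assumed scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()
```

After assessing objects locally against a rubric, one call per object would push the results to the platform, e.g. `post_assessment(build_assessment(813158, rubric_id, {614: 1.0}), api_key)` with the object and metric ids from this thread.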
Thanks, Daniel.
A discussion over email would be great, as it would give us time to prepare for any (additional) questions or notes to discuss. Is there a specific email address you prefer for it? I assume some of the questions might be more conceptual, too specific, or simply not issues to report here.
Fidan,
@fidanLimani you can use my email danieljbclarkemssm@gmail.com for correspondence related to this.
Server error We've reported this error and will try to understand why it occurred and fix it soon so it doesn't happen again. Sorry for any inconvenience.
Also, a typo in the original message: "occured" instead of "occurred".