OSTrails / FAIR_assessment_output_specification

Repository to track the requirements and specifications of FAIR assessment reports
Apache License 2.0

TestResult should not be part of a set #1

Open markwilkinson opened 3 months ago

markwilkinson commented 3 months ago

A test result should be completely agnostic of the rubric of which it is a part. Different assessment tools will assemble different tests, so there's no way for a test to know what rubric it is a member of. The ResultSet membership property is sufficient to manage this piece of information.
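
A minimal sketch of what I have in mind, using plain Python dictionaries and made-up property names (outputOf, hasMember, basedOnRubric) rather than terms from the spec:

```python
# A TestResult knows only which test specification produced it...
test_result = {
    "id": "result/001",
    "outputOf": "test/fair-metric-f1",  # the test specification that produced it
    "status": "pass",
    # ...and carries no reference to any rubric.
}

# The ResultSet's membership property groups results, and only the set
# points at the rubric that was used.
result_set = {
    "id": "resultset/abc",
    "basedOnRubric": "rubric/example-rubric",
    "hasMember": ["result/001", "result/002"],
}
```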

dgarijo commented 3 months ago

Currently, the rubric is independent of the test result. The test result set may point to the rubric that was used to produce that set of test results.

A test result may be returned together with other test results in a set (though not necessarily always). Each result points to the test specification that produced it, and that's it.

A test result set is not mandatory. It's a convenience for bundling test results without having to repeat the same metadata again and again.
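
As a rough sketch (made-up field names, not the spec's terms), the two return shapes look like:

```python
# Without a ResultSet: each result has to repeat the shared metadata.
standalone_results = [
    {"id": "result/001", "outputOf": "test/f1", "status": "pass",
     "generatedBy": "tool/f-uji", "generatedAt": "2024-03-19T08:44:00Z"},
    {"id": "result/002", "outputOf": "test/a1", "status": "fail",
     "generatedBy": "tool/f-uji", "generatedAt": "2024-03-19T08:44:00Z"},
]

# With a ResultSet: the shared metadata appears once, at the set level, and
# each member still points only to the test specification that produced it.
bundled_results = {
    "id": "resultset/abc",
    "generatedBy": "tool/f-uji",
    "generatedAt": "2024-03-19T08:44:00Z",
    "hasMember": [
        {"id": "result/001", "outputOf": "test/f1", "status": "pass"},
        {"id": "result/002", "outputOf": "test/a1", "status": "fail"},
    ],
}
```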

markwilkinson commented 3 months ago

This is going to require that we inject additional metadata into the output of a test... which is fine, but... feels wrong.

markwilkinson commented 3 months ago

Perhaps we need to color-code the schema diagram, to show which piece of software is generating which classes/properties.

In my mind, we have a test, a "workflow engine" that executes a set of tests (what we call an "assessment"), and the assessment is based on a rubric (which is presumably independent of the workflow engine, but used by it).

dgarijo commented 3 months ago

In the end, what I think of is the response that I would receive as a user/developer to do something with. For example, I run F-UJI, and I get a set of test results. I run FOOPS! and I get a set of test results. I run the evaluator with a rubric (if I understood correctly), and I get a set of test results.

In terms of granularity, it is true that you may have a workflow engine that runs individual tests. But from a usability point of view, you can simplify the output by stating "I assessed your resource and ran 20 tests, calling this API and using this tool". If you want to return the granular provenance per test, that is also possible, but it may be repetitive. I think the current modeling supports both. If you don't want to return a ResultSet, then you don't.

I agree that you have a test specification, a system that runs it and the test result. I don't understand the part where you would be injecting additional metadata into the output of a test.
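
Roughly, the three pieces I have in mind (invented names throughout, not normative):

```python
# A test specification, a system that executes it, and the TestResult it
# produces. The result points back to its specification and to the system
# that ran it, but not to any rubric or result set.
test_spec = {"id": "test/f1", "title": "Identifier is globally unique"}
system = {"id": "tool/example-engine", "name": "Example workflow engine"}

def run_test(spec: dict, system: dict, resource: str) -> dict:
    """Hypothetical executor returning a TestResult for one resource."""
    return {
        "id": f"result/{spec['id'].rsplit('/', 1)[-1]}",
        "outputOf": spec["id"],
        "generatedBy": system["id"],
        "target": resource,
        "status": "pass",  # placeholder outcome
    }

result = run_test(test_spec, system, "https://example.org/dataset/123")
```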

dgarijo commented 3 months ago

I will try to add two examples.

dgarijo commented 3 months ago

First example added. Also simplified the figure.