Open smirnp opened 6 years ago
No, there is no real way to check a result model. What exactly do you want to check?
Btw. the model you provide above seems to be faulty. It comprises 3 experiments: http://w3id.org/hobbit/experiments#1522761033233, _:b0 and _:b1. The two latter are blank nodes and should not be used for experiments. I assume that the piece of code that stores the KPI values in the result model is not correct and generates these blank nodes instead of using the experiment resource. Do you have the piece of code available on GitHub?
Thank you for your help! I have found the cause of my blank nodes (a missing experiment URI for the EvalModule, which I'm calling directly from the EvalStorage to avoid an unnecessary stream, without the init() method).
I wanted to find a way to automatically validate the result model against the benchmark model described in benchmark.ttl. While the SDK solves most of the development problems locally, the final integration after uploading to the platform (the validity of a result model, its compatibility with benchmark.ttl) is still a pain even for me :)
The only validation that I can think of is the following:

- For each KPI k and the experiment resource e, there is at least one triple e k o, where o should be of the type defined for k.
- e has no additional triples attached that do not represent a KPI or a parameter.

What do you think?
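The two checks above could be sketched like this, again over plain (s, p, o) tuples. This is only an illustration of the idea, not SDK code: the `kpi_types` mapping (KPI property to expected value type) would in practice be derived from benchmark.ttl, expected types are stood in for by Python types, and all names (`validate_experiment`, the `ex:` identifiers) are hypothetical:

```python
# Sketch of the proposed validation. The result model is a list of
# (subject, predicate, object) tuples; kpi_types maps each KPI property
# declared in benchmark.ttl to the type its value should have, and
# parameters is the set of allowed parameter properties.

def validate_experiment(triples, experiment, kpi_types, parameters):
    """Return a list of human-readable problems (empty list = model looks OK)."""
    problems = []
    about_e = [(p, o) for (s, p, o) in triples if s == experiment]
    # Check 1: every KPI k has at least one triple (e, k, o), and every such
    # o is of the type defined for k.
    for kpi, expected_type in kpi_types.items():
        values = [o for (p, o) in about_e if p == kpi]
        if not values:
            problems.append(f"missing value for KPI {kpi}")
        elif not all(isinstance(v, expected_type) for v in values):
            problems.append(f"wrong value type for KPI {kpi}")
    # Check 2: e carries no triples that represent neither a KPI nor a parameter.
    allowed = set(kpi_types) | set(parameters)
    for p, o in about_e:
        if p not in allowed:
            problems.append(f"unexpected triple: {p} -> {o}")
    return problems

kpis = {"ex:runtimeMs": float, "ex:errorCount": int}
params = {"ex:datasetSize"}
model = [
    ("ex:exp1", "ex:runtimeMs", 123.4),
    ("ex:exp1", "ex:datasetSize", 1000),
    ("ex:exp1", "ex:comment", "stray triple"),  # neither KPI nor parameter
]
for problem in validate_experiment(model, "ex:exp1", kpis, params):
    print(problem)
```

Run on the example model, this reports the missing ex:errorCount KPI and the stray ex:comment triple. The same two rules could also be expressed declaratively, e.g. as SHACL shapes generated from benchmark.ttl, which would let the platform reuse a standard validator instead of custom code.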
Hi!
Is there any way to quickly check the correctness of a result model for a particular benchmark? I receive the same error (with the query presented below), and currently the only way I have to check it is via manual runs in the GUI. Maybe the checking could be done via some unit tests or in some other way? Thanks!