There are two different things here:
To become a REC, each feature must have at least two implementations. This is frozen at the time of REC; it demonstrates feasibility. Normally this is done via a WG-endorsed test suite. From last time, that is https://w3c.github.io/data-shapes/data-shapes-test-suite/.
I don't know why it changed in January this year. It is linked to from https://www.w3.org/TR/shacl/ and as such should be frozen.
An ongoing report aimed at users choosing an implementation is different from the formal test suite. It is a community issue.
For test maintenance, there is already https://github.com/w3c/rdf-tests . Sharing that for maintenance beyond the lifetime of a working group would be good.
> But it doesn't help that there's no test runner

There are runners, probably one per implementation being tested, across various languages.
> Various implementations define their own tests
https://github.com/apache/jena/tree/main/jena-shacl/src/test/files is a copy of the W3C tests, included under license. It is included in every Jena release. Relying on the web to download material in a test suite is "unreliable" :smile:. There are some additional tests, but the bulk is the W3C test suite.
I understand that testing an implementation is the job of the implementor or the community, not the WG. But I'm posting this here as a focal point to gather efforts and links/resources.
> For test maintenance, there is already https://github.com/w3c/rdf-tests .
Can you elaborate? Is there some tooling in the RDF tests that can be reused for the SHACL tests?
> Various implementations define their own tests
I mean in addition to the standard tests, so these can be used to enlarge the standard test suite.
> For test maintenance, there is already https://github.com/w3c/rdf-tests .
>
> Can you elaborate?
It is a way for the test suite to continue to be kept up to date, errata applied, etc. That needs governance and community.
rdf-tests provides up-to-date versions of the RDF and SPARQL tests using the common test manifest format (the SHACL format is a little different but is based on the same manifest idea and vocabulary).
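For illustration, here is a minimal sketch of walking such a manifest (assuming Python with rdflib; `manifest.ttl` is a hypothetical local copy, and `mf:` is the common test-manifest vocabulary):

```python
# Sketch: iterate the entries of a W3C-style test manifest with rdflib.
# Assumes a local manifest.ttl; property names follow the common mf:
# vocabulary used by the RDF/SPARQL (and, with variations, SHACL) suites.
from rdflib import Graph, Namespace, RDF
from rdflib.collection import Collection

MF = Namespace("http://www.w3.org/2001/sw/DataAccess/tests/test-manifest#")

g = Graph()
g.parse("manifest.ttl", format="turtle")  # hypothetical local file

for manifest in g.subjects(RDF.type, MF.Manifest):
    for entries in g.objects(manifest, MF.entries):
        for entry in Collection(g, entries):  # mf:entries is an RDF list
            name = g.value(entry, MF.name)
            action = g.value(entry, MF.action)
            result = g.value(entry, MF.result)
            print(name, action, result)
```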
> Is there some tooling in the RDF tests that can be reused for the SHACL tests?
The WG already provides a test format which a toolkit-specific test runner can consume to execute the tests. Every implementation in the report will have done that.
I think you are asking for something else: a single, portable tool for users, not implementers, to evaluate implementations. It would run tests against any chosen implementation. That requires a common way to invoke "data+shapes" to get a conformance report, a way to check the conformance report against the expected outcome, and then reporting. It needs a common invocation mechanism (which does not have to be efficient, e.g. invoking a process per test), and even then, what about implementations that do not provide a CLI command? That does not sound like WG work within the current charter.
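To make the "check the conformance report against the expected outcome" step concrete, here is a minimal sketch (assuming Python with rdflib; the file names are hypothetical, and real checking would also need to normalize details such as result messages):

```python
# Sketch: compare an implementation's validation report against the
# expected report from the test suite, using blank-node-aware graph
# isomorphism. File names are hypothetical.
from rdflib import Graph
from rdflib.compare import isomorphic

expected = Graph().parse("expected-report.ttl", format="turtle")
actual = Graph().parse("actual-report.ttl", format="turtle")

print("PASS" if isomorphic(expected, actual) else "FAIL")
```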
> That requires a common way to invoke "data+shapes" to get a conformance report
Yes, but that's one small piece of the complete test runner.
> what about implementations that do not provide a CLI command?
I can add a CLI to a SHACL validator, and small invocation scripts per validator.
But if there were a core test runner, it would greatly help the overall task. I've seen runners implemented in several projects.
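For example, a minimal sketch of such a per-validator invocation script (assuming Python; `validate-cli` and its flags are hypothetical placeholders for whatever CLI a given validator exposes):

```python
# Sketch: invoke a validator's CLI once per test and capture its report.
# "validate-cli", --data, --shapes, and --report-format are hypothetical;
# each validator would need its own small wrapper like this.
import subprocess

def run_validator(data_file: str, shapes_file: str) -> str:
    """Run one test case and return the validation report as Turtle."""
    proc = subprocess.run(
        ["validate-cli",
         "--data", data_file,
         "--shapes", shapes_file,
         "--report-format", "turtle"],
        capture_output=True, text=True,
    )
    # Exit-code conventions vary per validator; a wrapper would map them
    # to pass/fail here. A core runner would loop over manifest entries,
    # call run_validator, and compare each report with the expected result.
    return proc.stdout
```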
I have a SHACL test runner as part of my SHACL implementation as well. It generally uses the same mechanism as the JSON-LD test runner: parsing the RDF test manifests into JSON-LD and then iterating over each manifest entry in the test runner. I have previously adapted a generic test runner to run against an arbitrary implementation, which requires that the implementation be easily installed and available through a CLI with some common options for specifying test locations and options.
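As a rough sketch of that manifest-to-JSON-LD approach (assuming Python with rdflib's JSON-LD serializer; `manifest.ttl` is hypothetical, and a real runner would typically apply a JSON-LD frame to get a predictable structure):

```python
# Sketch: load an RDF test manifest and iterate its entries via JSON-LD.
# The file name is hypothetical; real runners usually frame the JSON-LD
# so entries come out in a stable shape.
import json
from rdflib import Graph

g = Graph().parse("manifest.ttl", format="turtle")
doc = json.loads(g.serialize(format="json-ld"))

for node in doc:
    # Pick out SHACL test entries by their rdf:type.
    if "http://www.w3.org/ns/shacl-test#Validate" in node.get("@type", []):
        print(node["@id"])
```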
Note that the RDFa working group tried to maintain an HTML test runner that would reach out to various endpoints defined by implementations to run the test suite and generate EARL reports. This proved to be very difficult to keep running properly, so my preference is for implementations to continue to be responsible for running tests on their own and send in implementation reports.
> Note that the RDFa working group tried to maintain an HTML test runner that would reach out to various endpoints defined by implementations to run the test suite and generate EARL reports. This proved to be very difficult to keep running properly, so my preference is for implementations to continue to be responsible for running tests on their own and send in implementation reports.
I once proposed in the RML WG to have a repo which would fetch reports from implementors. That never took off, and I don't know what they are doing at present, but I could revive that idea for SHACL. The process: a central repo plus a generated website exists, and implementors submit PRs with links to their published reports.
We can provide an example GitHub workflow which would run your runner and then publish the report on GH Pages.
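As a sketch of the central-repo side (assuming Python; the `reports.json` layout and its field names are invented for illustration), a small generator could turn submitted report links into a summary page:

```python
# Sketch: build a simple HTML summary page from implementor-submitted
# report links. The reports.json layout is hypothetical; implementors
# would add entries via PRs.
import json

with open("reports.json") as f:  # e.g. [{"name": ..., "report": ...}, ...]
    reports = json.load(f)

rows = "\n".join(
    f'<li><a href="{r["report"]}">{r["name"]}</a></li>' for r in reports
)

with open("index.html", "w") as f:
    f.write(f"<html><body><h1>SHACL implementation reports</h1>"
            f"<ul>{rows}</ul></body></html>")
```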
(Related to https://github.com/w3c/shacl/issues/78)
When selecting a SHACL implementation (or when writing shapes), one of the most important considerations is which SHACL features are supported.
Running such tests is up to SHACL implementors.
Notes: