dvzrv opened this issue 2 years ago
Okay, scratch that. The tests are actually randomly failing no matter what I do.
This is rather bad, and because of it I will not run tests at all :(
With the following patch I can at least disable the broken test subdirectories:
```diff
diff -ruN a/tests/Makefile.am b/tests/Makefile.am
--- a/tests/Makefile.am	2013-12-08 02:57:05.000000000 +0100
+++ b/tests/Makefile.am	2022-02-19 16:47:41.562672003 +0100
@@ -37,7 +37,7 @@
 # Used to make N-triples output consistent
 BASE_URI=http://librdf.org/raptor/tests/
-SUBDIRS = rdfxml ntriples ntriples-2013 nquads-2013 turtle turtle-2013 trig grddl rdfa rdfa11 json feeds
+SUBDIRS = ntriples ntriples-2013 nquads-2013 turtle turtle-2013 trig grddl rdfa rdfa11 json
 $(top_builddir)/src/libraptor2.la:
diff -ruN a/tests/Makefile.in b/tests/Makefile.in
--- a/tests/Makefile.in	2014-11-02 07:04:38.000000000 +0100
+++ b/tests/Makefile.in	2022-02-19 16:47:41.569338695 +0100
@@ -338,7 +338,7 @@
 # Used to make N-triples output consistent
 BASE_URI = http://librdf.org/raptor/tests/
-SUBDIRS = rdfxml ntriples ntriples-2013 nquads-2013 turtle turtle-2013 trig grddl rdfa rdfa11 json feeds
+SUBDIRS = ntriples ntriples-2013 nquads-2013 turtle turtle-2013 trig grddl rdfa rdfa11 json
 all: all-recursive
 .SUFFIXES:
```
Although it is not ideal, it at least allows me to run the remaining tests. :tada:
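For downstream packaging, the same SUBDIRS edit can also be scripted instead of carried as a diff. A sketch, using a sample file to stand in for `tests/Makefile.am` and `tests/Makefile.in` (in a real build the `sed` line would target those two files):

```shell
# Stand-in for tests/Makefile.am / tests/Makefile.in (demo file only).
cat > /tmp/subdirs-demo.am <<'EOF'
SUBDIRS = rdfxml ntriples ntriples-2013 nquads-2013 turtle turtle-2013 trig grddl rdfa rdfa11 json feeds
EOF

# Drop only the rdfxml and feeds suites from the SUBDIRS line.
sed -i '/^SUBDIRS/{s/ rdfxml//; s/ feeds//}' /tmp/subdirs-demo.am

cat /tmp/subdirs-demo.am
# SUBDIRS = ntriples ntriples-2013 nquads-2013 turtle turtle-2013 trig grddl rdfa rdfa11 json
```

Note that `sed -i` as used here is GNU sed syntax; BSD sed would need `-i ''`.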
@dajobe fixing or at least acknowledging these test problems before a new release would have been really great :)
With 2.0.16 the tests are still broken, and even with the above tests disabled I still have a flaky test (it fails 50% of the time):
```
Testing Turtle parsing with N-Triples tests
Checking test-00.ttl Checking bad-00.ttl Checking test-28.ttl
Testing turtle serialization with legal turtle
Checking test-00.ttl Checking ../ntriples/test.nt
Testing turtle serialization with legal rdf/xml
Checking ../rdfxml/ex-00.rdf ok
FAILED
../../utils/rapper -q -i turtle -o ntriples ./test-00.ttl http://www.w3.org/2001/sw/DataAccess/df1/tests/test-00.ttl > test-00.res
ok
```
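One way such 50% flakiness can arise (an assumption on my side, not something the log proves) is two test runs racing to write the same result file, like the `test-00.res` redirect above. A toy illustration:

```shell
# Hypothetical illustration: two concurrent "suites" redirect into the same
# result file; which write survives depends on scheduling, so a comparison
# against either suite's expected output fails nondeterministically.
out=/tmp/race-demo.res
( printf 'suite-A output\n' > "$out" ) &
( printf 'suite-B output\n' > "$out" ) &
wait
cat "$out"   # suite-A or suite-B output, depending on scheduling
```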
I will now also disable the turtle tests:
```diff
diff -ruN a/tests/Makefile.am b/tests/Makefile.am
--- a/tests/Makefile.am	2014-11-14 19:11:13.000000000 +0100
+++ b/tests/Makefile.am	2023-03-02 19:49:59.269974344 +0100
@@ -37,7 +37,7 @@
 # Used to make N-triples output consistent
 BASE_URI=http://librdf.org/raptor/tests/
-SUBDIRS = rdfxml ntriples ntriples-2013 nquads-2013 turtle mkr turtle-2013 trig grddl rdfa rdfa11 json feeds
+SUBDIRS = ntriples ntriples-2013 nquads-2013 mkr turtle-2013 trig grddl rdfa rdfa11 json
 $(top_builddir)/src/libraptor2.la:
diff -ruN a/tests/Makefile.in b/tests/Makefile.in
--- a/tests/Makefile.in	2023-03-01 18:58:10.000000000 +0100
+++ b/tests/Makefile.in	2023-03-02 19:50:09.006659970 +0100
@@ -618,7 +618,7 @@
 # Used to make N-triples output consistent
 BASE_URI = http://librdf.org/raptor/tests/
-SUBDIRS = rdfxml ntriples ntriples-2013 nquads-2013 turtle mkr turtle-2013 trig grddl rdfa rdfa11 json feeds
+SUBDIRS = ntriples ntriples-2013 nquads-2013 mkr turtle-2013 trig grddl rdfa rdfa11 json
 all: all-recursive
 .SUFFIXES:
```
These are not broken for me; I test this regularly on several different Linux distributions (Debian, Ubuntu, Fedora, Gentoo). I don't happen to have/use Arch.
So you'll need to expand on "broken".
Maybe try `make -j1 check` to avoid any multi-threaded test-running issues. Although that seems to be OK these days, it mainly makes the output easier to read and diagnose.
> Maybe try `make -j1 check` to avoid any multi-threaded test-running issues. Although that seems to be OK these days, it mainly makes the output easier to read and diagnose.
Good catch! That seems to fix the tests (and I guess it also explains the flakiness/random behavior). Do the tests share common resources (e.g. tmp dirs, etc.)?
Either way, I can drop the patch and run the tests using only one job now! Thanks! :tada:
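For reference, `-j1` forces recursive make to run targets strictly in sequence, which is what removes any interleaving. A toy Makefile shows the effect (demo targets only, nothing to do with raptor's actual suites):

```shell
# Two targets that could interleave their output under -jN run strictly
# one after the other with -j1. Recipes use the "target: ; cmd" form so
# the heredoc does not need literal tab characters.
mkdir -p /tmp/j1demo
cat > /tmp/j1demo/Makefile <<'EOF'
check: a b
a: ; @echo start-a; sleep 0.1; echo end-a
b: ; @echo start-b; sleep 0.1; echo end-b
EOF

make --no-print-directory -C /tmp/j1demo -j1 check
# prints start-a, end-a, start-b, end-b, always in that order
```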
Hi! I'm packaging raptor for Arch Linux. To be able to run the tests I unfortunately have to remove many files from the test setup:
I am not sure whether these files are simply outdated or whether they fail due to changes in raptor's dependencies. Having so many failing test files is not great, though, and it eventually leads to downstreams disabling test suites altogether.
It would be amazing if you could create a new release (see #48) so that the failing files can subsequently be identified and fixed/removed.