pediapress / mwlib

MediaWiki parser library

test suite in mwlib-0.13.7 errors out for one apparent reason #15

Closed: idella closed this issue 12 years ago

idella commented 12 years ago

Having all the new deps for this version installed, the test suite should have everything it needs. It appears to run them all successfully, but from the ebuild it bails out claiming the test phase failed.


    creating build-2.7/scripts-2.7
    copying and adjusting sandbox/nslave.py -> build-2.7/scripts-2.7
    copying and adjusting sandbox/nserve.py -> build-2.7/scripts-2.7
    copying and adjusting sandbox/postman.py -> build-2.7/scripts-2.7
    changing mode of build-2.7/scripts-2.7/nslave.py from 644 to 755
    changing mode of build-2.7/scripts-2.7/nserve.py from 644 to 755
    changing mode of build-2.7/scripts-2.7/postman.py from 644 to 755

Source compiled.

  • Testing of dev-python/mwlib-0.13.7 with CPython 2.7...

    PYTHONPATH=build-2.7/lib.linux-x86_64-2.7 py.test
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.2 -- pytest-2.2.3
    collecting 0 items
    * ERROR: dev-python/mwlib-0.13.7 failed (test phase):

However, when run from the source directory it appears to succeed.

    archtester mwlib # cd /mnt/gen2/TmpDir/portage/dev-python/mwlib-0.13.7/work/mwlib-0.13.7/tests/
    archtester tests # cd ../
    archtester mwlib-0.13.7 # py.test -v tests
    =============================================== test session starts ===============================================
    platform linux2 -- Python 2.7.2 -- pytest-2.2.3 -- /usr/bin/python2.7
    collecting 389 items
    archtester mwlib-0.13.7 #

I have no idea where to look into this, since it offers no debug output and seems not to have tripped over any error. It may come from tests which require the installed mw-zip script. This is just the type of thing that plays havoc with the sandbox structure set up by portage. The test suites would be much more distro friendly if they did not do such things.
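If the suite really does shell out to an installed mw-zip, a pre-check along these lines would at least make that dependency explicit. This is a hypothetical snippet for illustration only, not part of mwlib or the ebuild:

    # Hypothetical pre-check: bail out early if the mw-zip script the tests
    # are said to call is not available on PATH inside the sandbox.
    if ! command -v mw-zip >/dev/null 2>&1; then
        echo "mw-zip not found on PATH; zip-related tests cannot run" >&2
        exit 1
    fi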

schmir commented 12 years ago

idella writes:

Having all the new deps for this version installed, the test suite should have everything it needs. It appears to run them all successfully, but from the ebuild it bails out claiming the test phase failed.

Why not ask the people who maintain the ebuild to fix that? We won't fix it! Sorry, no Gentoo here.

idella commented 12 years ago

Why not ask the people who maintain the ebuild to fix that?

Because I am 'them'. On another look I have pinned it down. This is the second instance I have found.

Here's a sample:

    tests/test_treecleaner.py:340: test_fixNesting6 PASSED
    tests/test_treecleaner.py:357: test_swapNodes PASSED
    tests/test_treecleaner.py:366: test_splitBigTableCells xfail
    tests/test_treecleaner.py:376: test_fixParagraphs xfail
    tests/test_treecleaner.py:383: test_cleanSectionCaptions PASSED
    tests/test_treecleaner.py:394: test_cleanSectionCaptions2 PASSED
    tests/test_treecleaner.py:407: test_removebreakingreturnsInside PASSED
    tests/test_treecleaner.py:426: test_removebreakingreturnsOutside PASSED
    tests/test_treecleaner.py:454: test_removebreakingreturnsMultiple PASSED
    tests/test_treecleaner.py:473: test_removebreakingreturnsNoremove xfail

As I suspected, it's bailing out on the xfails, the tests that are marked expected to fail.

    testing() {
            exit_status=0
            for test in tests/test_*.py
            do
                    py.test $test -x || exit_status=1
                    if exit_status=1;then
                            echo "the test culprit is $test"
                            return $exit_status
                    fi
            done
            return $exit_status
    }

This simple bash script exposed it. Uncomment the commented code above and it will exit on the very first test_ file in the queue. When run in isolation it doesn't matter that it returns a non-zero value. Put it in an ebuild, which runs phases like this && that in bash, and it aborts: failed tests yield a failed emerge.
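To illustrate why the non-zero return matters once the loop sits inside a phase, here is a minimal sketch. The names are made up and this is not the actual portage machinery, just the bash short-circuit behaviour being described:

    # Minimal sketch, not the actual portage machinery: a phase built from
    # an && chain stops at the first command that returns non-zero.
    testing() {
        return 1    # pretend one test file failed
    }

    run_phase() {
        testing && echo "test phase passed"
        # testing returned 1, so the echo never runs and run_phase itself
        # finishes with a non-zero status.
    }

    run_phase || echo "phase aborted: failed tests, failed emerge"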

"we won't fix it! "

Well, even if you don't, we can skirt it. At least 'we' have told you.

schmir commented 12 years ago

idella writes:

Why not ask the people who maintain the ebuild to fix that?

Because I am 'them'. On another look I have pinned it down. This is the second instance I have found.

;)

As I suspected, it's bailing out on the xfails, the tests that are marked expected to fail.

    for test in tests/test_*.py
    do
            py.test $test -x || exit_status=1
            if exit_status=1;then
                    echo "the test culprit is $test"
                    return $exit_status
            fi
    done
    return $exit_status

This simple bash script exposed it. Uncomment the commented code above and it will exit on the very first test_ file in the queue. When run in isolation it doesn't matter that it returns a non-zero value.

I think your shell script is broken (the if condition seems to be always true).
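For what it's worth, the reason the condition is always true: exit_status=1 written without spaces is a variable assignment, and an assignment used as a command always succeeds. A small sketch of the difference, assuming the intent was to test the value of the variable:

    exit_status=0
    if exit_status=1;then
        echo "always reached: the assignment succeeds (and sets exit_status to 1)"
    fi

    exit_status=0
    if [ "$exit_status" -eq 1 ]; then
        echo "only reached when exit_status really is 1"
    else
        echo "reached here, since exit_status is 0"
    fi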

Put it in an ebuild, which runs phases like this && that in bash, and it aborts: failed tests yield a failed emerge.

"we won't fix it! "

Well, even if you don't, we can skirt it. At least 'we' have told you.

I can see 2 failing test cases (test_redirect_kolumne_xnet/test_redirect_canthus_xnet). These are disabled if you run the tests with "XNET=1 py.test ...". If you think there are other problems with the test suite, please send us the output of

    XNET=1 py.test --tb=short -v

Cheers, Ralf

idella commented 12 years ago

I think your shell script is broken (the if condition seems to be always true).

Erm, maybe that should have been py.test -v $test || exit_status=1; I changed it around a bit. py.test -v $test should return 0 on success, and on a failure it flows on to || exit_status=1. The test files that passed didn't set exit_status to 1 before (I think), but now it's proving you right: py.test -v $test is intended to return zero when successful, yet it appears to be returning non-zero unconditionally, which defeats the purpose.

Anyway, the XNET=1 is what I was looking for. We can get a successful test phase now, but only by one means and not by another, preferred means, which still leaves questions. Some more testing to do later.
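For reference, a version of the loop with the condition actually testing the variable and the network-dependent tests disabled via XNET=1, as suggested above. The use of -v and the tests/ layout are carried over from the earlier snippets, so treat this as a sketch rather than the final ebuild code:

    testing() {
            local exit_status=0
            for test in tests/test_*.py
            do
                    XNET=1 py.test -v "$test" || exit_status=1
                    if [ "$exit_status" -eq 1 ]; then
                            echo "the test culprit is $test"
                            return "$exit_status"
                    fi
            done
            return "$exit_status"
    }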