chrisosaurus / dodo

scriptable in place file editor
MIT License

More comprehensive testing #10

Open chrisosaurus opened 9 years ago

chrisosaurus commented 9 years ago

It would be nice to have some more complete and comprehensive testing to increase my confidence in dodo being correct.

phillid commented 8 years ago

Just a passing thought: if we can prove that something like b0 e/foo/ always works, could we use it to check that the file matches the expected value? This style of testing would be more compact, since the checking would be done implicitly by the dodo script under test.

I understand it would be "risky" unless we could prove it is always correct. I also realise this couldn't be used for testing error cases.

chrisosaurus commented 8 years ago

@phillid I think that kind of testing done 'internally' to the tests is worthwhile. However, I think we also want some 'external' testing; note that this can be as simple as having an 'expected' final file that we compare against (in perl or bash), similar to what I do in my other projects: https://github.com/mkfifo/icarus/blob/master/t/custom/test_lex_example.pl

These can be cheaply generated by running your test script once, manually checking the file, and then making it the expected output.
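That bootstrap step might look like the following (a minimal sketch; `dodo` is mocked as a shell function so the snippet runs standalone, and `t/baz.*` is a hypothetical test case):

```shell
# Sketch of bootstrapping an expected output by running the test once
# and promoting the result.
# NOTE: `dodo` is mocked here so this runs standalone; the real binary
# edits the file in place according to the script on stdin.
dodo() { cat "$1" > /dev/null; }

mkdir -p t
printf 'abc\n' > t/baz.in     # hypothetical input
: > t/baz.dodo                # empty dodo script: file should be unchanged

# run the test once against a scratch copy
cp t/baz.in t/baz.tmp
dodo t/baz.tmp < t/baz.dodo

# after manually inspecting t/baz.tmp, promote it to the expected output
cp t/baz.tmp t/baz.out
```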

I was thinking we could have a t/ directory full of:

  foo.in
  foo.out
  foo.dodo
  bar.in
  bar.out
  bar.dodo

we can then just iterate through them

# untested code follows, meant to give the general idea rather than an actual solution
for infile in t/*.in; do
    base="${infile%.in}"
    testfile="$base.tmp"

    echo "testing $base"
    cp "$infile" "$testfile"

    dodo "$testfile" < "$base.dodo"

    if ! diff "$testfile" "$base.out"; then
        echo "test failed for $base"
        exit 1
    fi
done
chrisosaurus commented 8 years ago

@phillid thanks for making me think about this, I have just pushed 27c1e316503534ebf307a481cda757a1c8e25f3a to put this into place

note that my first test case is the one you discovered from https://github.com/mkfifo/dodo/pull/19#issuecomment-147675975

I currently have the expected .out as 'ahing', which is not what we want; once we have fixed issue #19 we should also correct this test.

chrisosaurus commented 8 years ago

I might eventually extend this to also capture the stdout of the dodo invocation into a foo.stdout file which we compare against, so that we can test p (print).
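A hedged sketch of that extension (again with `dodo` mocked as a shell function so it runs standalone; the real harness would invoke the dodo binary, and `foo.stdout` is the assumed name for the expected print output):

```shell
# Sketch: also diff captured stdout against an expected "$base.stdout".
# NOTE: `dodo` is mocked; the real binary edits in place and prints via `p`.
dodo() { cat "$1"; }

mkdir -p t
printf 'hello\n' > t/foo.in
printf 'hello\n' > t/foo.out
printf 'hello\n' > t/foo.stdout   # what the mocked run is expected to print
: > t/foo.dodo

for infile in t/*.in; do
    base="${infile%.in}"
    testfile="$base.tmp"
    cp "$infile" "$testfile"

    # capture stdout of the run so the `p` command can be tested
    dodo "$testfile" < "$base.dodo" > "$base.stdout.actual"

    diff "$testfile" "$base.out" || { echo "file mismatch for $base"; exit 1; }
    diff "$base.stdout.actual" "$base.stdout" || { echo "stdout mismatch for $base"; exit 1; }
    echo "ok $base"
done
```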

phillid commented 8 years ago

Cool, the foo.{out,in,dodo} layout is exactly the structure I was drafting. I have a small collection of tests which I will add to and submit eventually.

chrisosaurus commented 8 years ago

@phillid awesome! keen to add some more testing, so far my ones are very basic.

chrisosaurus commented 8 years ago

@phillid has done some awesome work on this, the test collection is looking good.

phillid commented 8 years ago

gcov is telling me test coverage is roughly 70%. Most of the uncovered code is in error cases, which would make sense seeing as the test suite only supports confirmation of "correct" behaviour.

It is concerning that these error cases remain "untested". Some error cases are harder to trigger than others: for example, it is easy to make dodo unable to open the working file, but a bit harder to get a null pointer passed to the parse_* family of functions.

Perhaps a test suite which could expect failures would be useful. I have knocked together such a script before for one of my own projects, but I'll see if I can tidy it up before starting to think about applying it to dodo.

chrisosaurus commented 8 years ago

The way I usually address this is two-part:

1) create tests that are expected to fail, and verify that they fail in exactly the way expected (using the same .in and .out mechanism we use for positive tests). This could be as easy as splitting t/ into t/positive (expected to pass) and t/negative (expected to fail)

2) create unit tests - in c - that purposefully call the functions with invalid arguments and check that this is caught. In some of my projects this can be very malicious, as I write unit tests that try to trigger every possible error.
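The positive/negative split from point 1 might look like the following (a sketch; `dodo` is mocked as a shell function so the snippet runs standalone, with a fake "BAD" input standing in for whatever makes the real binary exit non-zero):

```shell
# Sketch of a harness with expected-to-fail cases.
# NOTE: `dodo` is mocked; input containing "BAD" stands in for a real
# dodo error, which would normally come from the .dodo script failing.
dodo() { grep -q BAD "$1" && return 1; cat "$1" > /dev/null; }

mkdir -p t/positive t/negative
printf 'fine\n' > t/positive/ok.in
printf 'BAD\n'  > t/negative/boom.in

# positive cases must succeed
for infile in t/positive/*.in; do
    dodo "$infile" || { echo "unexpected failure: $infile"; exit 1; }
done

# negative cases must fail
for infile in t/negative/*.in; do
    if dodo "$infile" 2>/dev/null; then
        echo "expected failure but passed: $infile"
        exit 1
    fi
done
echo "all suites behaved as expected"
```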

chrisosaurus commented 8 years ago

Also worth mentioning that we can see coverage on master by looking at https://coveralls.io/github/mkfifo/dodo

which is linked to from https://github.com/mkfifo/dodo#dodo--