Open · marbetschar opened this issue 2 years ago
Unit testing (not involving the UI framework) can fairly easily be done using GLib.Test and the meson testing facilities - see unit tests in Files for example. Testing the UX is much harder.
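For readers new to it, a minimal GLib.Test entry point in Vala looks roughly like this (a sketch; Utils.format_duration is a hypothetical function standing in for whatever code you want to test):

```vala
// tests/TestUtils.vala - minimal sketch of a GLib.Test entry point
public static int main (string[] args) {
    GLib.Test.init (ref args);

    // Each test is registered under a slash-separated path
    GLib.Test.add_func ("/utils/format_duration", () => {
        // Utils.format_duration is hypothetical - substitute your own code
        assert (Utils.format_duration (60) == "1 min");
    });

    return GLib.Test.run ();
}
```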
@jeremypw agreed. I experimented a bit with GLib.Test and got the minimal infrastructure up & running. Probably still worth documenting, to encourage people to write tests and/or to establish a "standard way" (best practices) of organizing the code?
FWIW here's what I came up with: https://github.com/marbetschar/time-limit/commit/8518d332c47764b11b9b1214551542360204dfcd
Not sure you need the "IS_TEST" vala-arg - I did not use one but then I was only testing library code. Do you need to compile the whole source for the tests as you are only testing Utils.vala?
It is a long time since I wrote the Files unit tests so there is probably a cleaner way of doing it.
There can only be one main method in the executable; that's why I had to "remove" the Application's main method for testing.
I probably don't need to compile the whole source code, but I thought that's the easiest way to get started, and it needs minimal adjustments once the test scope increases.
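For reference, the IS_TEST approach mentioned above boils down to guarding the entry point with Vala's preprocessor, with the test target passing --define=IS_TEST in its vala_args. A minimal sketch (the Application class here is a stand-in, not necessarily what the linked commit does):

```vala
// src/Application.vala - main() is compiled out for the test executable
#if !IS_TEST
public static int main (string[] args) {
    var app = new Application ();
    return app.run (args);
}
#endif
```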
Regarding cleaner code: I'd love to use GLib's TestSuite and TestCase but was unable to figure out how exactly that would work. Having to declare both the function and its name is a bit cumbersome - even though it's quite simple.
FWIW I found some more documentation about GLib Test: https://docs.gtk.org/glib/testing.html
Was able to get GLib.TestCase working with a small VAPI fix for GLib.Test. This enables us to use set_up/tear_down methods for tests when needed: https://github.com/marbetschar/time-limit/pull/68/files
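For readers who haven't seen this pattern: the common shape (popularized by libgee's test helpers) is an abstract TestCase wrapper around GLib.TestSuite. The sketch below assumes such a helper class exists - the TestCase base class, its add_test/get_suite methods, and Utils.format_duration are all illustrative, not necessarily what the linked PR implements:

```vala
// Sketch of a test case with per-test set_up/tear_down,
// assuming an abstract TestCase helper that wraps GLib.TestSuite
public class UtilsTest : TestCase {
    public UtilsTest () {
        base ("Utils");
        add_test ("format_duration", test_format_duration);
    }

    public override void set_up () {
        // runs before every test in this case
    }

    public override void tear_down () {
        // runs after every test in this case
    }

    void test_format_duration () {
        assert (Utils.format_duration (60) == "1 min"); // hypothetical
    }
}

public static int main (string[] args) {
    GLib.Test.init (ref args);
    GLib.TestSuite.get_root ().add_suite (new UtilsTest ().get_suite ());
    return GLib.Test.run ();
}
```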
What's weird is the output of the test report - even though I declared 2 tests (and I verified both are executed), it always claims it ran 1/1 tests - which is obviously wrong. Any idea why?
```
$ ninja -C build test
ninja: Entering directory `build'
[0/1] Running all tests.
1/1 Tests OK 0.01 s

Ok: 1
Expected Fail: 0
Fail: 0
Unexpected Pass: 0
Skipped: 0
Timeout: 0
```
Because that count is based on the number of test() targets defined in your meson build scripts: Meson counts each test executable as a single test, regardless of how many GLib test functions run inside it.
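So if you want the summary to reflect individual tests, you would declare one test() entry per test executable in meson, along the lines of the following sketch (file names and the deps variable are hypothetical):

```meson
# tests/meson.build - each test() call gets its own line in the summary
utils_test = executable('utils-test', 'TestUtils.vala', dependencies: deps)
timer_test = executable('timer-test', 'TestTimer.vala', dependencies: deps)

test('utils', utils_test)
test('timer', timer_test)
```

A single executable that registers many functions via GLib.Test.add_func still shows up as one meson test, which is exactly the 1/1 above.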
Are there other existing Vala projects where running the tests produces the output you expect?
@colinkiama just started out experimenting, so I'm fairly new to testing in Vala. Any hints, tips or best practices are appreciated.
Sorry, I haven't spent much time testing in Vala either. Prior art covers what I would have mentioned already.
There can only be one main method in the executable; that's why I had to "remove" the Application's main method for testing.
That's why I queried whether you needed to compile Application.vala into the test executable when you are only testing Utils.vala - that seems to make it more complicated, not easier.
@jeremypw is there an easy way to remove the Application.vala entry from the sources array?
The only solution I found is to explicitly list all the files needed for testing - and in case we have just one test executable, this list eventually grows to contain "everything" except Application.vala, right?
Or is it better to have one test executable for each unit we are testing and declare the sources there explicitly (which seems to add complexity but would probably improve the output)?
@marbetschar in my projects I usually split the source files like this: https://github.com/manexim/home/blob/master/src/meson.build
Or is it better to have one test executable for each unit we are testing and declare the sources there explicitly (which seems to add complexity but would probably improve the output)?
That is what I did - I explicitly listed the files to be compiled. For a new project, partitioning the code into testable and non-testable files from the start makes sense.
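A sketch of that kind of split (file and variable names here are illustrative, loosely following the layout linked above):

```meson
# src/meson.build - testable code listed separately, without main()
core_sources = files(
    'Utils.vala'
)

# the app target adds the entry point on top
app_sources = core_sources + files('Application.vala')
```

```meson
# tests/meson.build - reuse core_sources and add the test entry point
tests_exe = executable('tests',
    core_sources + files('TestUtils.vala'),
    dependencies: deps  # hypothetical dependency list
)
test('unit-tests', tests_exe)
```

Since Meson variables are global across subdir() calls, core_sources defined in src/meson.build is visible in tests/meson.build as long as the root meson.build calls subdir('src') before subdir('tests').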
I’m back with some more testing experience 😅.
Here's an example that I think would be great to look at: https://github.com/lcallarec/live-chart/blob/master/tests/meson.build
My opinion is that tests should be defined in a separate tests directory. This way we can keep the build files small by taking advantage of subdir().
Another proposal I have is for each project "output" (library, executable etc.) to have its own tests directory. So for projects that produce both a library and an app, the library's directory would have a tests directory and the app's directory would also have a tests directory.
So the library’s tests would be unit tests and the app’s tests would be mainly for integration testing and UI testing.
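Under that proposal, a project tree might look roughly like this (a hypothetical layout, not taken from any of the linked repositories):

```
meson.build              # root: subdir('lib'), subdir('src'), ...
lib/
├── meson.build
└── tests/
    └── meson.build      # unit tests for the library
src/
├── meson.build
└── tests/
    └── meson.build      # integration/UI tests for the app
```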
Lastly, I think we should mention that the --verbose flag should be added to the test command so developers can see what's going on with their tests in detail. For example, people would be able to see each test function that passes until a test fails, and logs will be visible.
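For instance, with Meson's test runner (assuming a Meson version new enough to ship the meson test command):

```
$ meson test -C build --verbose
```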
Problem
Currently we are not using automated testing much - I think this is mainly due to a lack of knowledge. There's probably quite some potential for avoiding regressions, especially when we use a combination of automated UI and unit testing.
Proposal
Document the preferred way of doing UI testing as well as unit testing
Prior Art (Optional)