Following a brief discussion on Discord, where there was plenty of enthusiasm for automated tests but not much certainty about how to proceed, I thought I'd whip up something to spark some conversation.
The goal:
Do not require a big lift to get started (e.g. a large refactor into a modern .NET app).
Do not penalize the "production build" (the servers people are actually running) as test coverage increases; any performance loss or download-size increase should be minimal.
The core idea is pretty simple:
The test harness provides a new compilation entrypoint with its own Main method.
The compiler can build it (a handy shortcut is provided in test.bat), and the resulting executable only runs tests and reports which succeeded and which failed. A rough sketch of that runner loop appears at the end of this section. As a side effect of using the debug compiler flag, it also currently produces a .pdb file, which is handy for debugging.
If build.bat is run, the testing apparatus should be excluded from the build, so tests shouldn't impact the "production" size or performance.
To add a new test for the harness to run, write a static bool method that takes no parameters, performs whatever check you want, and returns true if the test passed and false if it failed; then annotate it with [TestMethod] from the Server.Tests namespace. A concrete example follows below.
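As a concrete illustration, here's roughly what that looks like. The attribute definition is my assumption of its shape based on the description above, and the test itself is deliberately trivial:

```csharp
using System;
using Server.Tests;

namespace Server.Tests
{
    // Assumed shape of the marker attribute described in the text.
    [AttributeUsage(AttributeTargets.Method)]
    public class TestMethodAttribute : Attribute { }
}

public static class ExampleTests
{
    // Deliberately trivial: static, no parameters, returns true when the test passes.
    [TestMethod]
    public static bool MaxPicksLargerValue()
    {
        return Math.Max(3, 7) == 7;
    }
}
```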
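And for the curious, here's a minimal sketch of the kind of reflection loop the runner's Main performs; it's illustrative of the approach, not a copy of the actual source:

```csharp
using System;
using System.Linq;
using System.Reflection;
using Server.Tests;

public static class TestRunner
{
    public static void Main()
    {
        // Find every static method in this assembly tagged with [TestMethod].
        var tests = Assembly.GetExecutingAssembly()
            .GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic))
            .Where(m => m.GetCustomAttribute<TestMethodAttribute>() != null);

        foreach (var test in tests)
        {
            bool passed;
            try
            {
                passed = (bool)test.Invoke(null, null);
            }
            catch (Exception)
            {
                // An exception escaping a test counts as a failure.
                passed = false;
            }

            Console.WriteLine("{0}: {1}.{2}",
                passed ? "PASS" : "FAIL",
                test.DeclaringType.Name,
                test.Name);
        }
    }
}
```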
Some ideas/questions/concerns/areas for improvement:
Right now I think it only detects code in the Source directory. I didn't have any luck getting it to detect tests in the Items directory, and I'd like the Data directory to be covered too if I can figure out how.
Maybe extend the output structure to incorporate an error message or some indication of which part of the test failed? Or perhaps keep it simple to encourage small tests?
Does requiring the test methods to be static leave certain scenarios untestable? Should the attribute target be something different?
Think about how to make mocking/stubbing lightweight for test writers. Ideally any work there stays in the tests namespace, and test authors just benefit from it without it spreading its tendrils across the codebase. And ideally, if/when new interfaces are created, mocking them shouldn't require a lot of manual work. Seems like some sort of reflection sorcery should enable automatic generation of delegates or something? Dunno, need to hack on that. (A rough sketch of the delegate idea is below, after this list.)
All of this is being done with very minimal C# experience, so there are probably smarter ways to do things.
Eventually, once the test-authoring story is solid, the reporting should also be improved so a CI pipeline has something to pass/fail builds with. Right now my poor man's version of that is the output strings produced by Test.exe. (A sketch of a cheap exit-code improvement is below.)
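On the mocking/stubbing point, here's one hedged sketch of the delegate approach, using a hypothetical IClock interface (nothing here exists in the codebase yet):

```csharp
using System;

// Hypothetical interface some piece of production code might depend on.
public interface IClock
{
    DateTime Now { get; }
}

// A hand-rolled stub that lives entirely in the tests namespace: each member
// is backed by a delegate field the test can swap out, so no mocking library
// (and no change to production code) is required.
public class StubClock : IClock
{
    public Func<DateTime> NowFunc = () => DateTime.UtcNow;
    public DateTime Now { get { return NowFunc(); } }
}
```

A test would then freeze time with something like new StubClock { NowFunc = () => new DateTime(2020, 1, 1) }; the "reflection sorcery" idea would amount to generating classes like StubClock automatically for each interface instead of writing them by hand.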
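And on the CI point, a cheap first step would be surfacing results through the process exit code, which any CI system can gate on. A sketch, assuming a hypothetical RunAllTests helper that wraps the discovery loop shown earlier:

```csharp
using System;

public static class TestRunner
{
    // Hypothetical helper: runs every discovered test and returns the failure count.
    static int RunAllTests() { /* ... discovery + invocation as sketched above ... */ return 0; }

    // Returning the failure count makes "Test.exe && deploy" style pipelines work:
    // exit code 0 means all tests passed, anything else fails the CI build.
    public static int Main()
    {
        int failed = RunAllTests();
        Console.WriteLine(failed == 0 ? "All tests passed." : failed + " test(s) failed.");
        return failed;
    }
}
```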