Closed: michalporeba closed this issue 5 years ago
I'm not entirely sure I got the point of your OP, but a few things stood out to me on first read.
> I'm looking for an easier way to write unit tests for tests
In my experience, testing code that tests production code involves basically the same concerns as testing production code. For example, I have a fairly substantial module that is mostly implementations of tests specific to DSC resources. That module, for example, implements `Assert-ParameterOrdinality`, tests it here, and uses it here. The purpose-built test commands implemented in that module are implemented and tested in the same manner as production code, but are used inside `It {}` blocks in place of `Should`.
> Any mistake in the code and "Test my check" passes, as there is an exception thrown.
This seems rather like a straw man. If you are testing for a particular cause of an exception, you would use some matching arguments with `-Throw`:

```powershell
{ Confirm-MyCheck } | Should -Throw 'specific message' -ExceptionType ([SpecificExceptionType])
```
> I would like to be able to do something like in the It script blocks `{ Confirm-MyCheck } | Should -Skip ...`
It seems like you could implement this sort of functionality by defining, for example, a function called `ShouldSkip`. `ShouldSkip` would invoke the scriptblock, perform whatever checks it needs to on the result, and throw whatever Pester assertion is needed to achieve the right outcome. I implement such special-purpose functions regularly in my projects. I suspect you could even use `Add-AssertionOperator` to get the `Should -Skip` syntax.
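A minimal sketch of such a purpose-built function, assuming (as discussed later in the thread) that a skipped test surfaces as an error with the `PesterTestSkipped` ID; the function name and failure message are illustrative:

```powershell
# Hypothetical purpose-built assert: invokes the scriptblock and verifies
# it raises Pester's 'skipped' error, failing the test otherwise.
function ShouldSkip {
    param([scriptblock]$ScriptBlock)

    try {
        & $ScriptBlock
    }
    catch {
        # The expected outcome: the block signalled a skip.
        if ($_.FullyQualifiedErrorId -eq 'PesterTestSkipped') { return }
        # Any other exception is a genuine failure; let it bubble up.
        throw
    }
    throw "Expected the scriptblock to skip the test, but it did not."
}

# Usage inside an It block:
# ShouldSkip { Confirm-MyCheck }
```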
> To make it even better the test cases could use some convention to quickly do a lot of tests.
It seems to me that with the right arguments to a purpose-built assert command, you can achieve what you are describing.
I've read your blog post (thanks for the shout-out), and I am also not entirely sure what you are after.
> `Should -Pass` which would ensure there are some `Should` calls in the `Confirm-MyCheck` and that exceptions are not thrown.
The assertions are based purely around exceptions; the framework does not really care whether you fail the test by using `Should` or by just throwing an exception. There is no proof that the test actually did anything: a test calling an empty function is not distinguishable from a test calling a function that does not throw.
Another thing that puzzles me is how `{ Confirm-MyCheck } | Should -Throw` proves that the check is correct. When there are no parameters and the code always throws, how can you use it?
OK, so the `Should -Pass` was there because I wrote it before I read the Pester code properly. Now I know that a not-implemented test is marked as pending, and really what I'm after is confirming that the test is not pending. I wanted to have a way to see whether the test was implemented.
The other puzzle is the `{ Confirm-MyCheck } | Should -Throw`. Since writing the checks themselves was out of scope for those examples, I simplified them, perhaps too much. Typically there would be more; in dbachecks it would probably look more like
`{ $mockdata | Confirm-MyCheck -With $config -Because "of a good reason" } | Should -Throw`
Although, since we are talking about environmental checks, it is possible to imagine that `Confirm-MyCheck` calls some functions which get configuration from somewhere and internally test something like the percentage of free disk space, and for the unit test we mock the functions used internally.
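A sketch of that mocking idea, assuming a hypothetical internal helper `Get-FreeDiskSpacePercent` that `Confirm-MyCheck` would call (both the helper name and the threshold behaviour are assumptions for illustration):

```powershell
Describe "Confirm-MyCheck" {
    It "fails when free disk space is below the threshold" {
        # Replace the internal data-gathering call with a canned low value,
        # so the check's failure path is exercised deterministically.
        Mock Get-FreeDiskSpacePercent { 5 }

        { Confirm-MyCheck } | Should -Throw
    }
}
```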
Anyway, I digress; I wanted to focus on the `Should` rather than on passing variables to my check in the example. Perhaps I oversimplified it.
What am I after here? As you say, the framework is based on exceptions and does not care how the exception is thrown. But I do care. I would like to be able to distinguish whether the exception has the ID `PesterAssertionFailed`, `PesterTestInconclusive`, or `PesterTestSkipped`, or is just a random exception. When I unit test my environmental checks, I need to make sure that a check skips when needed, that it fails when the conditions are right, and that it doesn't just randomly error due to bad coding.
The examples @alx9r provided in the links are very helpful, and I've learnt a lot from that approach, but while they are good for a software developer, they are not necessarily easy to read for your average system administrator. I think it is very important for environmental checks to be very human-readable, for easier adoption in teams with mixed skills, or perhaps even by DBAs and sysadmins with little PowerShell knowledge.
I appreciate it might be difficult to see my point, as it is purely about environmental checks, not unit tests as such. Perhaps what I should do is focus on the other change regarding skipping from inside the script block, and then once I know the code better I will create a quick PoC to show what I really want, and how it could be implemented, and then the examples will be more real. Perhaps that will make the case stronger. (In my head it is very reasonable and with good use cases ;) )
As soon as I started thinking of improving #1022 I found an example of what I really want to do easily.
I'm working on `Set-ItResult`, so to see if it works I have
```powershell
Describe "Working on Set-ItResult" {
    It "This test should be skipped" {
        Set-ItResult -Skipped -Because "we want it to be skipped"
    }
}
```
and I can see the test skips, and the output is correct. But now I'd like to validate it with a unit test (here we can omit the logic; the test should always skip, so imagine my `Confirm-MyCheck` contains only that `Set-ItResult` for now).
So what I would like to do here is
```powershell
Describe "Unit testing Set-ItResult" {
    It "Set-ItResult -Skipped should skip a test" {
        { Set-ItResult -Skipped } | Should -Skip
    }
}
```
Does it make any more sense?
Reading the source code some more, I found that @nohwnd added `Verify-AssertionFailed` 4 months ago. I can see now that I could write my skipping tests as something like this
```powershell
Describe "Unit testing Set-ItResult" {
    It "Test skipping" {
        try { Set-ItResult -Skipped -Because "we are testing" }
        catch {
            $psitem.FullyQualifiedErrorID | Should -Be "PesterTestSkipped"
        }
    }
}
```
so perhaps what I'm after is just some syntactic sugar to make that test easier to write, like the last code block from the previous comment? What do you think about that?
```powershell
Should -Skip
Should -BeInconclusive
Should -Fail
Should -Succeed # yes, it is back here, just to verify the test is not marked as pending
```
Oh nice, you found my `Verify-AssertionFailed`. Fleshing out that concept in my head took me a while, but it greatly simplifies testing of assertions, and in the end it is logical to do it this way. :)
In that case the first three assertions just boil down to a specialized case of `Should -Throw`. To assert on `FullyQualifiedErrorId` you can use the `-ErrorId` parameter, or `-PassThru` the error object and assert on it.
```powershell
Describe "Unit testing Set-ItResult" {
    It "Test skipping" {
        # do a 'like' match
        { Set-ItResult -Skipped -Because "we are testing" } | Should -Throw -ErrorId "PesterTestSkipped"

        # or do an exact match
        $err = { Set-ItResult -Skipped -Because "we are testing" } | Should -Throw -PassThru
        $err.FullyQualifiedErrorID | Should -Be "PesterTestSkipped"
    }
}
```
You can put the code in a function, call `Should -Throw` from it, and register it with `Should` as a custom operator.

This still does not test the actual outcome of the test (which your last operator probably needs to do), only that an assertion with the proper ID is thrown; whether the framework reacts to it and produces the correct result is another matter. For such cases we run the test in a separate job and assert on the output, but such tests are slow and hard to debug. To make testing such cases simpler, the `It` logic would have to be refactored to detach from the global Pester state. That might not be difficult to do, but at the moment there is not much need for it :)
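The custom-operator route mentioned above might be sketched like this with Pester 4's `Add-AssertionOperator`; the operator name, function name, and failure message are assumptions, not an existing implementation:

```powershell
# Hypothetical custom operator: succeeds when the piped scriptblock
# raises Pester's 'skipped' assertion, fails otherwise.
function ShouldSkipTest {
    param($ActualValue, [switch]$Negate, [string]$Because)

    $skipped = $false
    try { & $ActualValue } catch {
        $skipped = $_.FullyQualifiedErrorId -eq 'PesterTestSkipped'
    }

    # Pester expects an object with Succeeded and FailureMessage.
    [PSCustomObject]@{
        Succeeded      = if ($Negate) { -not $skipped } else { $skipped }
        FailureMessage = "Expected the scriptblock to skip the test, but it did not."
    }
}

Add-AssertionOperator -Name Skip -Test $function:ShouldSkipTest

# Enables: { Set-ItResult -Skipped } | Should -Skip
```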
Yes, the `| Should -Throw -ErrorId "PesterTestSkipped"` gets very close to what I need, and it is almost as simple as I need it to be. If I register it as a custom operator, I think it will give me what I need, but it leaves one little problem: external code that depends on the internal magic string "PesterTestSkipped".
I can live with that, I guess, and I'm going to implement it one way or another. The only remaining question (in my opinion) is whether you think anybody else could benefit from it, and whether there is room for those additional `Should` options in the project. If it is worth it, I can prepare a PR implementing those changes; if not, I'll just implement them in dbachecks as custom operators.
As for the pass, at this level I think I'll be happy if I know the test is not pending and it does not throw.
So for now I have implemented the functionality I need in dbachecks as a custom assertion. Please have a look here:
https://github.com/sqlcollaborative/dbachecks/blob/ccc/internal/functions/PesterAssertions.ps1
https://github.com/sqlcollaborative/dbachecks/blob/ccc/tests/PesterAssertions.Tests.ps1
Workaround was found, closing this.
Hello! So, I ended up using the four custom assertions from the comment above a lot.
Of course there is the workaround, but I find it much easier, when I'm writing tests for my environmental checks, to be able to do:

```powershell
| Should -Pass
| Should -Skip
| Should -Fail
| Should -BeInconclusive
```
I don't know if there is any appetite (or need) for it but I'd be happy to contribute those if anybody else could find it useful.
## General summary of the issue
I'm looking for an easier way to write unit tests for tests, both positive and negative tests, as well as ensuring tests are skipped or marked as inconclusive when necessary. For more context please see this blog post.
## Expected behaviour

### Positive tests
Positive tests appear to be fairly easy. Using my confirm functions (as described in that blog post), I can test the test by doing something like this:
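The code block here did not survive extraction; judging from the rest of the thread, the positive test was presumably along these lines (the Describe/It names are guesses):

```powershell
Describe "Testing my check" {
    It "Test my check" {
        { Confirm-MyCheck } | Should -Not -Throw
    }
}
```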
But is that really a good test? All I have to do is create an empty function `Confirm-MyCheck` and the test passes. I'm not validating whether the test makes any assertions successfully; I'm simply testing that there are no exceptions. I would like to be able to do something like `Should -Pass`, which would ensure there are some `Should` calls in `Confirm-MyCheck` and that exceptions are not thrown.

### Negative tests
Negative tests are even worse.

Any mistake in the code and "Test my check" passes, as there is an exception thrown. Worse still, if I use `Set-TestInconclusive`, my test will pass too. We are now talking (#1022) about potentially using exceptions to skip tests from inside the tested block, which again will make my negative test pass when it probably shouldn't. I would like to be able to do something like this in the It script blocks:
Tests would do what they should, and not pass or fail because of a lack of implementation or a coding error. It would also make it easier to write more explicit tests. Removing the need for `{ ... }` before `| Should` would be even better, if possible.

EDIT:
To make it even better, the test cases could use some convention to quickly run a lot of tests. Imagine we are writing a function that reverses a string. We want to mark the test as inconclusive when the word is a palindrome, skip it when the input is empty, succeed when the result is correct, and fail when the input is null.
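A plain Pester 4 `-TestCases` sketch of those reversal tests might look like the following (the function and parameter names are hypothetical, and note it still declares the `param` block the author would prefer to infer by convention):

```powershell
Describe "ConvertTo-ReversedString" {
    It "reverses '<Word>' to '<Expected>'" -TestCases @(
        @{ Word = 'abc';   Expected = 'cba' }    # normal case: should pass
        @{ Word = 'level'; Expected = 'level' }  # palindrome: should be inconclusive
        @{ Word = '';      Expected = '' }       # empty: should be skipped
    ) {
        param($Word, $Expected)
        ConvertTo-ReversedString -Word $Word | Should -Be $Expected
    }
}
```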
I intentionally left out the `param($Input, $ExpectedResult)`, as it would be nice not to have to declare it explicitly when using conventions like that. The output should read something like
Here you can see the extra messages. I think skipping and marking tests as inconclusive should take a `-Because` parameter, as `Should` does, to give more information to the user.

## Possible Solution
(working on it)