michaeltlombardi opened 7 years ago
I think we would just have to rerun lines 93-94. Do you think that will solve the issue? https://gist.github.com/Midacts/be1f8ed00cb13764996a31de3e65b5ce
(Lines 109-110 show the updated code)
At a base level, yep, that's it! We may want to kick off & $Fix in a nested Try/Catch block, but I haven't decided on that yet. It would require writing back a brief section of $Error in the Catch.
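Roughly what I'm picturing for that nested block (just a sketch, assuming the $Object, $Fix, and $Error values already in scope in VesterTemplate.Tests.ps1):

Try {
    & $Fix
}
Catch {
    # Write back a short slice of $Error so the operator can see why the fix failed
    Write-Warning "Remediation of $Object failed: $($Error[0].Exception.Message)"
    throw $_
}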
I added in the try/catch and tested it. I made it so that if the check after the remediation fails, it will output additional detail.
How I tested: I had my netdump-settingsEnable.vester.ps1 file specify $Type = 'string'. In my config file, I changed "false" to false, so as far as Vester was concerned the value was no longer a string but a boolean.
Below is an example output. This change adds everything after "WARNING: Remediating host1.test.example.com".
I think it could be helpful to have all of this output for troubleshooting purposes. But maybe you guys might think it is overkill or too verbose.
Describing Host Configuration: NetDump-SettingsEnable.Vester.ps1
WARNING: Expected: value to be empty but it was { }
WARNING: Remediating host1.test.example.com
WARNING: Rerunning the 'Network Dump Enable' test against host1.test.example.com
[-] Host host1.test.example.com - Network Dump Enable 429ms
Expected: value to be empty but it was { }
Desired: False - bool
Actual: false - string
Type: string
at line: 134 in C:\Program Files\WindowsPowerShell\Modules\Vester\Private\Template\VesterTemplate.Tests.ps1
I will open a PR after #114 is closed, since this change will affect that file as well. The proposed changes are reflected in the below gist: https://gist.github.com/Midacts/be1f8ed00cb13764996a31de3e65b5ce
Random question: what happens if the remediation fails and the test fails a second time? Is it now in a loop, or does it just exit after the second check?
Sorry, another question: is there a need to run the second test again? Or should we just capture whether the $Fix was successful? I think the latter is cleaner.
That's the part I was testing. It runs the test; if the test fails and -Remediate is specified, it tries to remediate. Then it reruns the same test to make sure it now succeeds. If it fails again, it throws an error, basically as it would if you didn't kick off -Remediate. (I added in some additional logging for when the test that runs after the remediation fails.) But after that part, it continues on to the next test.
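To make the flow concrete, here is a rough sketch of the Try/Catch with the rerun folded in (illustrative only; the actual code is in the gist, the warning text just mirrors the sample output above, and $Title is assumed to hold the test's display name):

Try {
    # First pass, exactly as the template does today
    Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type) | Should BeNullOrEmpty
}
Catch {
    If ($Remediate) {
        Write-Warning -Message $_
        Write-Warning -Message "Remediating $Object"
        & $Fix

        # Rerun the same comparison; if it still fails, the error (plus the extra
        # Desired/Actual/Type detail in the gist's version) bubbles up to Pester
        Write-Warning -Message "Rerunning the '$Title' test against $Object"
        Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type) | Should BeNullOrEmpty
    }
    Else {
        # -Remediate is not active, so just report the error
        throw $_
    }
}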
I like the second attempt. After all, "Success" doesn't necessarily mean that the fix was applied. It just means there was no discernible error that caused the try/catch to fail, right?
I see what you are saying. I would say, just based on some of the testing I did today, even if $Fix succeeds, that does not necessarily mean it will pass the test. Some other things could be misconfigured that lead to the test failing (which is how I was able to set up my test scenario above).
Hmm, I would think that since the goal is to have a specific test with a known state, value, etc., a retest after a fix would be a waste. The fix either works or it needs to be re-coded. The only reason I'm asking is that we just spent time on performance, so this sounds like a step in the wrong direction as far as efficiency goes. If the fix is not working right and reports that it was successful, it will be picked up in the next round of tests.
I think something potentially more useful would be a rerun after all tests complete, only on the items that were remediated. Kind of a second round, but the scope would be limited. It would also potentially give someone fewer results to review.
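A rough sketch of what that limited second round could look like (purely illustrative; the $remediated list does not exist in Vester today, and the Invoke-Vester parameters at the end would need checking against the module's actual help):

# Collect the test files that actually ran a $Fix during the first pass
$remediated = [System.Collections.Generic.List[string]]::new()
# ...inside the template's Catch, right after a successful & $Fix:
#     $remediated.Add($TestFilePath)   # hypothetical variable for the current test file

# After the full run, make a second, limited pass over only those tests
If ($remediated.Count -gt 0) {
    Invoke-Vester -Config $Config -Test $remediated
}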
At that point I'd leave it to the higher-ups to decide.
Technically it would only affect performance on remediations, though.
True, I don't think it's much of a time loss. But I'm wondering what the XML looks like now too. Are you able to post a snippet of that?
Oh, I'm not saying I'm right or wrong, just throwing some thoughts out there.
I've never run Vester with XML output before. I should try it out though. That and NUnit.
XML is awesome! Said no one ever, lol. But it has its uses. I use it with Vester as a way to enhance the results and share with my team.
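For anyone else who hasn't tried it, the XML comes from Pester underneath Vester; plain Pester can emit NUnit-style results like this (Pester v3/v4 parameter names, and the path is just a placeholder; check Get-Help Invoke-Vester for the wrapper's own equivalent switch):

Invoke-Pester -Script .\Tests -OutputFile .\VesterResults.xml -OutputFormat NUnitXml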
This is going back to adding more info in the try/catch of the remediate. I was testing out some new tests and mine failed with this:
Desired: 1500 - int
Actual: 1000 - long
Type: int
It was really nice to know that it needed to be a 'long' and not an 'int'.
In this scenario, I ran a GetType() on the object and it showed that it was an 'Int64', which is a long. I figured 'int' would work, but apparently not.
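For anyone chasing a similar mismatch, the quickest check is to evaluate the test's $Actual scriptblock by hand and look at the type that comes back (the values below are just the ones from my failure above):

PS> $value = & $Actual          # whatever the test's $Actual scriptblock returns
PS> $value
1000
PS> $value.GetType().FullName
System.Int64                    # i.e. a 'long', so $Type should be 'long' rather than 'int'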
I've been doing some more testing on this and wanted to bring something up for discussion.
I want to preface this by saying that I think my findings might make it so we will not want to rerun tests when -Remediate is specified.
Say we are running a VM test. In the desired, if we are just specifying $Object, which houses the output from Get-VM, then when you go to rerun the test after -Remediate is specified, the test will fail.
The reason for this (and this is the whole crux of the issue) is because $Object is referencing the original Get-VM value.
If you run a test and it fails and you remediate it, the value should be updated. Unless we rerun Get-VM, the rerun of the test will fail every time.
That being said, in the $Actual scriptblock, if we use (Get-VM $Object) instead, it will guarantee the values are fresh no matter what we do. The problem with this is that we will see a definite increase in the time it takes to run tests.
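To make the trade-off concrete, here are the two styles of $Actual side by side (the NumCpu property is just an example):

# Uses the VM object captured before remediation -- cheap, but stale after a fix
$Actual = { $Object.NumCpu }

# Re-queries vCenter every time the scriptblock runs -- always fresh, but slower
$Actual = { (Get-VM -Name $Object.Name).NumCpu }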
I would personally vote against re-running tests after remediation.
I would instead suggest this:
- the user runs a test
- they see something fail, so they rerun the test with -Remediate
- after the remediation run, they rerun the tests to make sure things are good
As a side note, I would still really like to have more verbose output in the else { } block when a test fails in the VesterTemplate.Tests.ps1 file. Meaning, instead of what it is currently:
# -Remediate is not active, so just report the error
throw $_
to change it to:
# -Remediate is not active, so just report the error
throw "$($_.Exception.Message)`nDesired: $Desired - $($Desired.gettype())`nActual: $(& $Actual) - $($(& $Actual).gettype())`nType: $Type"
The reason being that currently, if a test fails, all we know is that it failed. The error it throws is: Expected: value to be empty but it was { }. I think it might be nice to know a little bit about why it failed.
When the Compare-Object is run against $Actual and $Desired, any difference it finds becomes the error Pester throws. Here is the line:
Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type) | Should BeNullOrEmpty
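As a quick illustration of how that line turns a mismatch into a failure (made-up values, same shape as the 1500/1000 example above):

PS> $Desired = 1500
PS> $Actual  = { 1000 }
PS> $Type    = 'int'
PS> Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type)

InputObject SideIndicator
----------- -------------
       1000 =>
       1500 <=

# Any output at all here means Should BeNullOrEmpty fails, which is when Pester reports
# "Expected: value to be empty but it was { }"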
Expected Behavior
When running a Vester test with the -Remediate flag, the command should re-execute the initial condition test after the remediation action - this ensures that the test results are actually as expected post-remediation. This should be true for all published/included tests.
Current Behavior
At least some tests do not rerun the comparison post-remediation.
Possible Solution
Update the template to rerun the test again - this means test-writers don't have to think about it, they get it for free from the template.
Context
This would ensure that if a remediation completes without throwing an error, the test will still fail if the conditions specified are not met.
Your Environment
Vester module version: 1.0.1
OS version: 10.0.14393
PowerShell version: 5.1.14393.693