pester / Pester

Pester is the ubiquitous test and mock framework for PowerShell.
https://pester.dev/

RFC: Better API for `Invoke-Pester` #1265

Open nohwnd opened 5 years ago

nohwnd commented 5 years ago

I am trying to simplify the interfaces on Invoke-Pester, to create a pit of success, where the usual stuff is simple and easy to discover, and advanced stuff is less simple and less easy to discover. The way Pester v4 does that (to some extent) is by hiding hashtables behind parameters; this gives you a lot of freedom, but hardly a clean, strongly typed interface. What I would like is a simple interface that is easy to start with, and explicit configuration so I don't have to remember the structure of hashtables. It would still be nice to support hashtables, so they can be used as literals in the code, modified, and passed around easily.

Simple

What I am assuming is that a typical person has a folder with tests, and wants to do some basic filtering on them. They also want to run them locally while developing, and run them on a build server easily. To me the simplest non-restrictive parameter set is:

Invoke-Pester 
 -Path <string[]>
 -ExcludePath <string[]>
 -Name <string[]>
 -ExcludeName <string[]>
 -Tag <string[]>
 -ExcludeTag <string[]>
 -PassThru <switch>
 -Parallel <int>
 -Watch <switch>
 -Ci <switch>
 -Output <Normal, Short, None>

This would cover 99% of what I do. 😁

This minimal api would then be typically used like this:

Invoke-Pester
Invoke-Pester -Watch
Invoke-Pester -Path C:\projects\mymodule
Invoke-Pester -Path . -Ci
Invoke-Pester -Path . -Ci -PassThru | Export-NunitXML -Path "TestResults.xml"
Invoke-Pester -Tag "Integration*" -Ci -PassThru | 
    Export-NunitXML -Path "TestResults.xml"

Invoke-Pester -Path "*.Integration.Tests.ps1" -Ci -PassThru | 
    Export-NunitXML -Path "TestResults.xml"

-Path <string[]> List of paths to run *.Tests.ps1 from. When the path is a folder, it will run all *.Tests.ps1 files in it and its subfolders. When the path is a wildcarded file like c:\tests\*.ps1, it will run all *.Tests.ps1 files in that folder. When the path is a full path to a file, it will run that file, even if it is not a .Tests.ps1 file. Defaults to the current directory "." (just like in v4).

-ExcludePath <string[]> Excludes paths from the test run. It uses -like wildcards to match the paths, and automatically replaces '\' with '/' to make it work cross-platform easily.
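The slash normalization described above is simple to sketch. This is a hypothetical illustration of the idea, not actual Pester code; the function name and paths are invented:

```powershell
# Hypothetical sketch of cross-platform path exclusion: normalize
# backslashes to forward slashes on both sides, then match with -like.
function Test-ExcludedPath ([string]$Path, [string[]]$ExcludePath) {
    $normalized = $Path -replace '\\', '/'
    foreach ($pattern in $ExcludePath) {
        if ($normalized -like ($pattern -replace '\\', '/')) { return $true }
    }
    $false
}

Test-ExcludedPath 'C:\tests\slow\Db.Tests.ps1' '*/slow/*'   # $true
```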

-Name <string[]> Filters on Block and Test name, using . as a delimiter for the path. I think this is a useful and easy to use convention. For example, a test 'it1' in 'describe2' that is inside of 'describe1' would give the test a full path of describe1.describe2.it1, and we can then filter like this: 'it1' to run just that test, 'describe1.*' to run everything in that block, or '*.describe2.*' to run everything in a describe2 block no matter where it is placed.

-ExcludeName <string[]> Same as above, but it excludes the tests that match. Exclude takes precedence over any other filter.
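The dot-delimited convention above could be built on PowerShell's ordinary -like wildcard matching. A minimal sketch of how such a filter might match (hypothetical helper, not actual Pester code): a pattern matches when it matches either the full dot-joined path or the leaf test name.

```powershell
# Hypothetical sketch of the proposed dot-path name filter.
function Test-NameFilter ([string]$FullPath, [string]$Pattern) {
    $leaf = ($FullPath -split '\.')[-1]
    ($FullPath -like $Pattern) -or ($leaf -like $Pattern)
}

Test-NameFilter 'describe1.describe2.it1' 'it1'            # $true - just that test
Test-NameFilter 'describe1.describe2.it1' 'describe1.*'    # $true - whole block
Test-NameFilter 'describe1.describe2.it1' '*.describe2.*'  # $true - describe2 anywhere
```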

-Tag <string[]> Filters blocks and tests based on their tags. If any tag matches, the test will run. Accepts -like wildcards.

-ExcludeTag <string[]> Same as above, but it excludes matching tags.

-PassThru <switch> Outputs the test result object.

-Parallel <int> Runs the tests in parallel (with per-file granularity). The number lets you specify how many test files should run in parallel; defaults to your core count.

-Watch <switch> Watches the given file paths, and if any file changes, waits 2 s and then runs the tests in it.

-Ci <switch> A single switch that combines multiple things that are used in CI builds: run the tests in strict mode (only Passed/Failed results are permitted, not Skipped etc.), do NOT honor focused tests/blocks, and exit with a non-zero exit code if any test fails.

-Output <Normal, Short, None> Controls the verbosity of the on-screen output. The naming clashes with what is already there, and the goal is not to offer all the options from v4 here: there might be more options later, the output should be customizable, and different modes (e.g. a normal run vs. a parallelized watch run) might need different output formatting to make sense. So say we go with the first: -Output Normal, Short, None (or -Show?).

Normal would show the complete output. Short would show, per block, the failed tests and a count of the passed tests:

Describing abc
    [-] err 1
    [-] err 2
    [-] err 3
    [+] and 100 passed tests

There is still the option to add more verbosity levels above (like Trace, Debugging, Detailed) and in the middle (like Minimal), but let's not do that now, especially with the possibility that the output will be customizable.

Other possible namings: -Output Detailed, Normal, Minimal, Quiet or -Output Trace, Debugging, Normal, Short, Minimal, None.

Advanced

For advanced stuff there would be functions that generate the objects you need to provide. That way we avoid a confusing API that uses Object for parameters just so you can provide a string / string[] / scriptblock / hashtable / hashtable with aliases to a -Script parameter that is used as -Path <string> 90% of the time anyway.

So for example defining an advanced filter would be:

# New-FilterOption?
# New-PesterFilter?
New-Filter 
 -Tag <string[]>
 -ExcludeTag <string[]>
 -MustHaveAllTags <switch>
 -Name <string[]>
 -ExcludeName <string[]>
 -ScriptBlock <scriptblock>
 -LiteralPath <string[]>
 -ExcludeLiteralPath <string[]>
 -Position <string[]>
 -ExcludePosition <string[]>

New-Filter -AsHashtable <switch>
New-Filter -AsHashtableLiteral <switch>

Invoke-Pester -Filter <Object>

The New-Filter function would produce an object containing all the filter parameters you specified. This has a few advantages over pure hashtables:

  1. you don't have to remember which hashtables Pester takes, because you get the parameters from IntelliSense.
  2. it avoids parameter explosion on Invoke-Pester, because it does not have to be extended every time.

There might also be switches that would return the object as a hashtable / template so you can manipulate it easily by code, or write it as a literal and then splat it.

e.g. New-Filter -AsHashtableLiteral | clip

# paste it here and set what you need,
# e.g. I want all integration tests for PowerShell Core
$filter = @{
    Tag = "Integration", "Core"
    MustHaveAllTags = $true
}
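The edited literal could then be splatted back into the (hypothetical) New-Filter function, and the resulting filter object passed to Invoke-Pester:

```powershell
# Hypothetical usage of the proposed API: splat the edited literal
# into New-Filter, then pass the filter object to Invoke-Pester.
$pesterFilter = New-Filter @filter
Invoke-Pester -Path . -Filter $pesterFilter
```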

The same approach would be applied to other advanced stuff like:

It would also be a nice step towards configuration because

In between?

There is probably a huge gap between the simple interface and the advanced interface. Is another one needed? What would it look like? How would it be implemented so as not to overwhelm a beginner trying to understand what to do?

Jaykul commented 5 years ago

Side note: Can you edit the original post to put code markers around things? Specifically, where you had wildcards in the docs for path and filters -- it seems that a few backslashes and wildcards are missing in the rendered output.

I like the idea of simplifying the core Pester command, even if it means we need a "Filter" or "Options" command.

I wonder if a config file, rather than (just) a live object would be ideal for helping contributors run tests the way I expect them to be run against my projects. Maybe the Filter command should be called New-PesterConfig and can return an object you can pass to a -Config parameter -- or can write a PesterOptions.psd1 file and you can pass the path to that, instead. Pester could even look for PesterOptions.psd1 in the test path you pass to it, picking it up automatically.
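A PesterOptions.psd1 along those lines might look something like this. The file name comes from the suggestion above; the keys mirror the parameters proposed in the RFC and are purely illustrative, since no such file format exists yet:

```powershell
# Hypothetical PesterOptions.psd1 - keys mirror the proposed parameters.
@{
    Path       = '.'
    ExcludeTag = 'SqlIntegration'
    Output     = 'Short'
}
```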

In any case, the hard part is getting the right "minimal" set of options.

Excluding things

I think that "Exclude" for anything but Tag should be an advanced feature. Partly just because interactive use would be so much better if there was only one parameter starting with -Ex, and partly because I can sum up the vast majority of my own use as "run all the tests in this folder except the ones that I tagged so that I could exclude them" E.g.:

Invoke-Pester -ExcludeTag SqlIntegration

For all of my other filtering, I can get by with an included folder list and/or specifying the test names.

Continuous Integration

I can't imagine running CI builds without measuring code coverage, and exporting the test results and code coverage to files so that we can upload them to the CI system as results. Perhaps rather than a simplistic -CI switch, we need a wrapper around Pester for integration runs, so we can avoid needing to -PassThru the object and pass it to multiple export commands.
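A wrapper along those lines might look roughly like this. This is entirely hypothetical: Export-NunitXML comes from the examples earlier in the thread, Export-CodeCoverage is an invented placeholder, and FailedCount is assumed to exist on the result object:

```powershell
# Hypothetical CI wrapper: run once, export results and coverage,
# then fail the build if any test failed.
$result = Invoke-Pester -Path . -Ci -PassThru
$result | Export-NunitXML -Path 'TestResults.xml'
$result | Export-CodeCoverage -Path 'CodeCoverage.xml'   # invented name
if ($result.FailedCount -gt 0) { exit 1 }
```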

In addition to handling the source path for coverage, and the output paths for both coverage and results, a CI wrapper needs to handle updating the PSModulePath for the test environment.

My teams and projects are moving to using tools for dependencies. RequiredModules and PSDepend support saving required modules to a sub-folder. But during testing I need to fix that folder at the front of my PSModulePath, along with the location of my code under test. Particularly important for Pester to handle if it's going to try to parallelize or watch with runspaces/jobs.
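The PSModulePath fix-up described here is only a couple of lines. A sketch of what such a wrapper could do; the folder names are examples:

```powershell
# Sketch: put the saved dependencies and the code under test at the
# front of PSModulePath before the run (folder names are examples).
$sep = [System.IO.Path]::PathSeparator   # ';' on Windows, ':' elsewhere
$env:PSModulePath = ('.\RequiredModules', '.\src' -join $sep) +
    $sep + $env:PSModulePath
```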

Watching

I'm not too sure about this scenario.

In an ideal -Watch scenario (i.e. the way "continuous testing" works in Visual Studio), I want the tool to understand source control (what code has changed) and test coverage (which tests cover which code), and to prioritize tests that affect stuff that's changed. Then it needs to feed the results back to the editor so it can provide continuous feedback to the user, like NCrunch or VS 2017.

For now, I think this should be a separate command, because it's incomplete without the test-coverage correlation to edits, and will rarely be used until there is a tighter feedback loop (i.e. editor integration).

Anecdotally: we wrote a watching wrapper at work a few years ago, which guessed at which tests to run based on pending edits and test file names or tags. Despite the effort put in (and the knowledge of our code structure which made it possible to filter which tests to run) we rarely used it because the results were not visible in the editor where we were working. By the time we switched to git, nobody even bothered updating it.

One note from that experience: for PowerShell, you need to at least re-import the code under test -- otherwise the user has to unload/reload in their test fixtures.
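Re-importing the code under test on each watch iteration is the usual workaround. This is a generic PowerShell pattern, not a Pester feature; the module and test file names are examples:

```powershell
# Generic pattern: force a reload of the module under test before
# re-running its tests, so edits are picked up without restarting.
Import-Module .\MyModule.psd1 -Force
Invoke-Pester -Path .\MyModule.Tests.ps1
```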

nohwnd commented 5 years ago

Formatted.

I wonder if a config file,

Yes that would be a nice addition, and something I am playing with, but I don't want to force anyone to fiddle with config files just to be able to run some tests. So the configuration has to be an addition to an already working solution.

Excluding things

Here I was simply adding an Exclude for every 'Include' that I saw as useful. I agree that excluding things might be more advanced, but it seems like a logical next step after running some tests. This set of options also gives the possibility to exclude stuff at all three levels: by excluding a file from the run, by excluding a test without modifying the file, or by adding a tag to a test and excluding it. But I see how having too many options might be confusing. I will think about it further.

Continuous Integration

I was thinking about adding code coverage to the -Ci switch as well. My main issue with this is that code coverage makes the test run slow, and that I need to provide info about where to store it. But I could also choose default names and make -Ci enable code coverage, saving the coverage report and NUnit report to predefined file names, like CodeCoverage.jacoco and TestResults.xml (I will look into whether there is some standard naming that build servers pick up by default).

Sounds good?

Watching

I've made a proof of concept of this, and it seemed to work quite well as long as I edited one file and its test file was small. Personally I'd use it, especially in combination with -Focus so it re-runs my test when I save, but I tend to write code that lives in a vacuum, so it might work well just for me. With more tests, or tests that are more expensive to run, that might not be the case, so I see your point. I will move this to the 5.x milestone.

Parallel

My teams and projects are moving to using tools for dependencies. RequiredModules and PSDepend support saving required modules to a sub-folder. But during testing I need to fix that folder at the front of my PSModulePath, along with the location of my code under test. Particularly important for Pester to handle if it's going to try to parallelize or watch with runspaces/jobs.

Parallelizing tests in v5 is simpler because the -PassThru output is a collection of results per file (ehm, container; it could also be a scriptblock, or raw text, but whatever), so combining them is easy. What I am missing is how to make it practical. Right now, if every test file is independent, it would work just fine. I, for example, favor bigger files with multiple functions related to one feature, but I see that not everybody does it the same way. So I was thinking that there would be a common setup that could be run to initialize the environment before the run. It could be at the file level, or shared for the whole run. I don't want to go into technical details.

How do you currently solve the problem that you described?

felixfbecker commented 5 years ago

Regarding PassThru, why is it not the default?

nohwnd commented 5 years ago

@felixfbecker dunno, probably because the screen output is the "default" output and the result object was an afterthought, so -PassThru was added as a way to keep the API compatible. It would make sense to make it the default, imho. Piping the output to null is easy, and it avoids situations where you pipe the result into some other cmdlet but there is no output. I will think about it :)
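If the result object did become the default output, suppressing it would stay a one-liner (standard PowerShell, shown here for illustration):

```powershell
# Discard the result object when you only want the screen output.
Invoke-Pester -Path . | Out-Null
# or, slightly faster:
$null = Invoke-Pester -Path .
```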

felixfbecker commented 5 years ago

Can't the result be formatted with Format.ps1xml files?

nohwnd commented 5 years ago

It can be formatted, but won't be colored using the xml, right? I think the colored output was there first and the object output was added later, and because outputting the result object was a new feature, -PassThru was added so the behavior stayed the same unless you opted in by using the -PassThru parameter.

felixfbecker commented 5 years ago

You can use ANSI escape sequences in the ps1xml files to color the output. I've done that for quite a few modules and it works pretty well. They are supported in the Windows 10 terminal (the old console API is somewhat legacy).
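For illustration, the kind of ANSI coloring one might embed in a ScriptBlock expression inside a Format.ps1xml view could look like this. A minimal hand-written sketch, not Pester's actual format file:

```powershell
# Sketch: color a pass marker green with a raw ANSI escape sequence.
$esc = [char]27
"$esc[32m[+]$esc[0m passed"   # renders "[+]" in green on ANSI terminals
```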

felixfbecker commented 5 years ago

Using ANSI escape codes would probably also fix output not being colored in CI, which is very unfortunate:

Travis CI ![image](https://user-images.githubusercontent.com/10532611/58371056-3e1f0200-7f0e-11e9-9d68-44ee04a82b53.png) https://travis-ci.org/felixfbecker/PSKubectl/jobs/500233965
VSTS ![image](https://user-images.githubusercontent.com/10532611/58371067-5727b300-7f0e-11e9-99e4-474fd6b89fa7.png) https://felixfbecker.visualstudio.com/PowerGit/_build/results?buildId=179
nohwnd commented 3 years ago

@indented-automation @fflaten @johlju @glennsarti and anyone else? How do you think this should be approached? Are you missing having a separate cmdlet for each section? Or should we rather invest more into better validation and conversions in the config object itself? And possibly into a shortcut syntax, e.g. @{ Run.Path = "C:\abc" } == @{ Run = @{ Path = "C:\abc" } }?
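The conversion behind that shortcut syntax is straightforward to sketch. A hypothetical helper, not part of Pester; it expands dotted keys into nested hashtables:

```powershell
# Hypothetical helper: expand dotted keys into nested hashtables,
# so @{ 'Run.Path' = 'C:\abc' } becomes @{ Run = @{ Path = 'C:\abc' } }.
function Expand-DottedKeys ([hashtable]$Table) {
    $result = @{}
    foreach ($key in $Table.Keys) {
        $parts = $key -split '\.'
        $node = $result
        # walk/create intermediate tables for all but the last segment
        for ($i = 0; $i -lt $parts.Count - 1; $i++) {
            if (-not $node.ContainsKey($parts[$i])) { $node[$parts[$i]] = @{} }
            $node = $node[$parts[$i]]
        }
        $node[$parts[-1]] = $Table[$key]
    }
    $result
}

(Expand-DottedKeys @{ 'Run.Path' = 'C:\abc' }).Run.Path   # C:\abc
```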

Are you missing some basic options on the simple api that would make it way better? For me it would be adding the -ScriptBlock back to Invoke-Pester, but I don't know if anyone else even uses that.

fflaten commented 3 years ago

I think the simple interface looks good, maybe with the mentioned ScriptBlock (+ the current -Container that's not mentioned).

The lack of tab-completion, and especially of validation for string options in the configuration, is what I miss the most for stuff like Output.Verbosity etc.

The shortcut dictionary syntax looks nice for ad-hoc, but not critical for me. I mostly use hashtable when writing tests for Pester itself to keep it short.