mwhelan / Specify

Specify is an opinionated .Net Core testing library that builds on top of BDDfy from TestStack
http://specify-dotnet.readthedocs.org/
MIT License

Specify and Base classes #13

Open DamianReeves opened 7 years ago

DamianReeves commented 7 years ago

I want to use Specify with my Akka.Net tests. Akka tests, however, use a base class called TestKit, so I can't use the ScenarioFor<> base classes. I did notice that you use interfaces, so I tried implementing what was needed by the IScenario interface and getting things to run. I can pretty much implement the interface, but the Specify() method accesses the Host, which is internal, and I can't get at it without resorting to reflection. Is it possible to enable scenarios where users like me can't directly inherit from ScenarioFor<>?

mwhelan commented 7 years ago

Hi @DamianReeves. Sorry for the delay in responding and thanks for your interest in Specify. I am open to making improvements to enable different usage scenarios. I would be interested to see a sample of what you're describing above, particularly some code I could download and play with.

I think it's worth mentioning some of the design patterns I use in my tests, as they illustrate the sorts of scenarios I'm enabling in Specify. I tend to use a layered architecture for my tests, with a specification layer and an application driver layer separating the specification of the test from the implementation.

So, in your case with Akka, I would create an AkkaDriver class to encapsulate the interaction with Akka. This becomes your System Under Test class. Because Specify supports IoC, you can register it in your container and Specify will resolve it for you (along with all your application types, as you would use the same IoC config your app uses). I would probably have this driver class inherit from TestKit to take advantage of its functionality, which nicely sidesteps the multiple inheritance limitation. I'd probably also add some methods to create an API for executing actors, etc. with Akka, one that can gracefully catch exceptions and serve them back to the specification in an easy-to-use way. This is nice because you can then create assertion extension methods for your AkkaDriver class, such as ShouldThrowException.
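
To make that a bit more concrete, something along these lines is the shape I have in mind (a rough sketch only; AkkaDriver and its members are illustrative, not types that exist in Specify or Akka.NET):

using System;
using Akka.Actor;

// Illustrative sketch: the driver inherits TestKit so the test classes don't have to,
// and exposes a small API for driving actors and capturing any exception for later assertions.
public class AkkaDriver : Akka.TestKit.Xunit2.TestKit
{
    public Exception Exception { get; private set; }

    public IActorRef CreateActor(Props props, string name = null)
    {
        return Sys.ActorOf(props, name);
    }

    public void Send(IActorRef actor, object message)
    {
        try
        {
            actor.Tell(message, TestActor);
        }
        catch (Exception exception)
        {
            // Captured here so a Then step or assertion extension can inspect it.
            Exception = exception;
        }
    }

    public TMessage ExpectReply<TMessage>()
    {
        return ExpectMsg<TMessage>();
    }
}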

For example, I write my ASP.Net MVC Core applications with Mediatr. So, I use this MediatorDriver class to encapsulate the interaction with Mediatr:

public class MediatorDriver
{
    private readonly IMediator _mediator;

    public MediatorDriver(IMediator mediator)
    {
        _mediator = mediator;
    }

    public Exception Exception { get; set; }
    public Dictionary<string, string> ValidationErrors { get; } = new Dictionary<string, string>();

    public async Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request)
    {
        try
        {
            // Await here so exceptions thrown by the handler are caught below.
            return await _mediator.SendAsync(request);
        }
        catch (ValidationException validationException)
        {
            foreach (var error in validationException.Errors)
            {
                ValidationErrors.Add(error.PropertyName, error.ErrorMessage);
            }
        }
        catch (Exception exception)
        {
            Exception = exception;
        }
        return default(TResponse);
    }
}

This driver would always be used in the When method to execute the scenario. It returns the result of the execution and stores any general exceptions and validation errors, which can then be asserted against in the Then steps or via the assertion extension methods.

The MediatorDriver class would be used as the SUT for all my Mediatr tests (pseudo code - this may not compile and is not a good example of a scenario):

public class CreateValidPerson : ScenarioFor<MediatorDriver, PersonStory>
{
    private CreatePersonCommand _command;

    public void Given_valid_Person_details()
    {
        _command = Builder<CreatePersonCommand>.CreateNew()
            .With(x => x.Name = "Bob")
            .Build();
    }

    public void When_I_create_the_Person()
    {
        // Block on the async send so the Then steps run after the command has completed.
        SUT.SendAsync(_command).GetAwaiter().GetResult();
    }

    public void Then_the_Person_should_be_created()
    {
        using (var context = The<ApplicationDbContext>())
        {
            context.Persons.Count().Should().Be(1);
        }
    }

    public void AndThen_there_should_not_be_any_errors()
    {
        SUT.ShouldNotHaveErrors();
    }
}
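
The assertion extension methods mentioned above, such as ShouldNotHaveErrors, are just ordinary extension methods over the driver. A rough sketch (not part of Specify; it assumes FluentAssertions, which the scenario above already uses):

using FluentAssertions;

public static class MediatorDriverAssertions
{
    // Sketch only: asserts against the state the driver captured in the When step.
    public static void ShouldNotHaveErrors(this MediatorDriver driver)
    {
        driver.Exception.Should().BeNull("no exception should have been thrown");
        driver.ValidationErrors.Should().BeEmpty("there should be no validation failures");
    }
}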

Sorry if that is a longer answer than you needed. It's kinda also the start of a blog post I've been meaning to write for a while, but hopefully it is of some interest... :-) Like I say, I'm also interested in enabling your scenario if you want to give me a bit more info on that, but this is another way you could go.

DamianReeves commented 7 years ago

Thanks for the detailed answer. I'm going to look into trying the driver approach.

DamianReeves commented 7 years ago

So I'm not in love with trying to use the driver for this. I have to consider additional concerns with that approach. For example, I was able to control whether or not I got a TestKit instance per test using normal xUnit methodologies, but with the driver approach I need to handle that in my driver. The driver seemed like too high a price to pay when this works, just not through Specify:

    [Story(AsA = "As a developer",
        IWant = "I want to create a named Workspace",
        SoThat = "So that I can execute skills")]
    public class Building_a_workspace : TestKit
    {
        public Workspace SUT { get; private set; }
        public WorkspaceName WorkspaceName { get; set; }

        public void Given_a_workspace_name_of__workspaceName__(WorkspaceName workspaceName)
        {
            WorkspaceName = workspaceName;
        }

        public void When_we_build_the_workspace()
        {
            var builder = new WorkspaceBuilder()
                .UseActorSystem(Sys);
            SUT = builder.Build(WorkspaceName);
        }

        [Then("Then the workspace's name should be <workspaceName>")]
        public void Then_the_workspaces_name_should_have_the_assigned_name()
        {
            SUT.Name.Should().Be(WorkspaceName);
        }

        public void And_the_workspace_type_should_be_DefaultWorkspace()
        {
            SUT.Should().BeOfType<DefaultWorkspace>();
        }

        [Fact]
        public void Building_a_workspace_using_the_WorkspaceBuilder()
        {
            this.Given(_ => _.Given_a_workspace_name_of__workspaceName__(WorkspaceName))
                .When(_ => _.When_we_build_the_workspace())
                .Then(_ => _.Then_the_workspaces_name_should_have_the_assigned_name())
                    .And(_ => _.And_the_workspace_type_should_be_DefaultWorkspace())
                .WithExamples(new ExampleTable("Workspace Name")
                {
                    {WorkspaceName.Create()}, //Creates a new randomly generated WorkspaceName
                    {WorkspaceName.Create()}, //Creates a new randomly generated WorkspaceName
                })
                .BDDfy();
        }
    }

mwhelan commented 7 years ago

The driver would be the SUT and you would get one TestKit instance per instance of the driver. So, it's the creation of the SUT that you need to control from the test. With Specify, I would use the Setup method to create the driver, similar to how you are doing it here or, preferably, have the IoC container create it for you.

I'm also involved with BDDfy and TestStack though, so I'm glad you've found a good solution, whichever works best for you! :-)

DamianReeves commented 7 years ago

That's the problem. I need a TestKit instance per example. I started doing the following to get PerExample support:

    internal class SpecifyTestRunner : IProcessor {
        public ProcessType ProcessType
        {
            get { return ProcessType.Execute; }
        }

        public void Process(Story story)
        {
            foreach (var scenario in story.Scenarios)
            {
                var executor = new ScenarioExecutor(scenario);
                executor.InitializeScenario();

                if (scenario.Example != null)
                {
                    var unusedValue = scenario.Example.Values.FirstOrDefault(v => !v.ValueHasBeenUsed);
                    if (unusedValue != null) throw new UnusedExampleException(unusedValue);
                }

                // Give the scenario a chance to set up per-example state before its steps run.
                var perExampleAction = scenario.TestObject as IPerExampleAction;
                perExampleAction?.BeforeExample(scenario.Example);

                var stepFailed = false;
                foreach (var executionStep in scenario.Steps)
                {
                    if (stepFailed && ShouldExecuteStepWhenPreviousStepFailed(executionStep))
                        break;

                    if (executor.ExecuteStep(executionStep) == Result.Passed)
                        continue;

                    if (!executionStep.Asserts)
                        break;

                    stepFailed = true;
                }

                // Tear down per-example state (e.g. a TestKit) once the example's steps have run.
                perExampleAction?.AfterExample(scenario.Example);
            }
        }

        private static bool ShouldExecuteStepWhenPreviousStepFailed(Step executionStep)
        {
            return TestStack.BDDfy.Configuration.Configurator.Processors.TestRunner.StopExecutionOnFailingThen || !executionStep.Asserts;
        }
    }

    public interface IPerExampleAction
    {
        void BeforeExample(Example example);
        void AfterExample(Example example);
    }
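
A rough sketch of how a scenario could opt in to that hook (illustrative only; it assumes this SpecifyTestRunner is registered in place of BDDfy's default TestRunner and that the scenario class is the TestObject it inspects):

    public class Building_a_workspace_per_example : ScenarioFor<Workspace>, IPerExampleAction
    {
        private Akka.TestKit.Xunit2.TestKit _testKit;

        public void BeforeExample(Example example)
        {
            // Fresh TestKit (and ActorSystem) for every example row.
            _testKit = new Akka.TestKit.Xunit2.TestKit();
        }

        public void AfterExample(Example example)
        {
            // Shut the per-example ActorSystem down once the example's steps have run.
            _testKit.Shutdown();
        }

        // Given/When/Then steps would drive actors through _testKit.Sys.
    }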

But then I decided to simplify it: create the TestKit in the Setup method and do the teardown I need in TearDown. What seems to be missing from the Container is a way of getting a scoped dependency. I could do something like register a Func<TestKit> in the container, but it felt like my test was getting really complex.

mwhelan commented 7 years ago

Examples are the hardest thing for me to marry up with BDDfy. It might be that the SUT is created once per class through Specify when it actually needs to be created once per example. If that's the case, then we need to address it in Specify to match the behaviour of BDDfy. You shouldn't have to resort to a custom processor.

However, the simplest thing is to create the SUT in one of the BDDfy methods, such as the Given method in your previous example. That guarantees an instance per example.

If you have some sample code I could download I would be happy to work through some options with you.

DamianReeves commented 7 years ago

I'll spin up an example. But part of what I encountered is that when I tried to use the container, any Set operations ran per example, and if I combine that with setting Container.SystemUnderTest I end up getting an exception about changing dependencies after Container.SystemUnderTest has already been set. In other words, if I do container setup in Setup, that Setup runs for each example.

DamianReeves commented 7 years ago

So Specify gives me no hook for configuring my context uniquely for each example.

mwhelan commented 7 years ago

That's right. It's currently based on one class per scenario. Examples mean multiple runs per scenario, so we might have to make some changes in Specify to accommodate that.

DamianReeves commented 7 years ago

Another helpful thing would be for the IContainer to have a registration method that takes a factory delegate, e.g.:

T Set<T>(Func<T> factory, string key = null) where T : class;
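
For instance (hypothetical usage only, assuming that overload were added), the factory would defer creation so each resolution, and therefore each example, could get its own instance:

// Hypothetical: not an existing Specify API. The factory runs at resolution time,
// so a fresh Akka.NET TestKit could be produced per example.
Container.Set<Akka.TestKit.Xunit2.TestKit>(() => new Akka.TestKit.Xunit2.TestKit());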