mock-server / mockserver

MockServer enables easy mocking of any system you integrate with via HTTP or HTTPS with clients written in Java, JavaScript and Ruby. MockServer also includes a proxy that introspects all proxied traffic including encrypted SSL traffic and supports Port Forwarding, Web Proxying (i.e. HTTP proxy), HTTPS Tunneling Proxying (using HTTP CONNECT) and SOCKS Proxying (i.e. dynamic port forwarding).
http://mock-server.com
Apache License 2.0

Allow replay of recordings #101

Closed belun closed 9 years ago

belun commented 9 years ago

I am not sure whether this feature exists (still investigating the code). If it exists, can you please add an example (record a request/response with the Proxy and replay it using the MockServer)? (A)

If not, then how am I supposed to replay the recorded requests/responses (I am talking about the data that comes out of ProxyClient.dumpToLogAsJava)? (B)

(I am working on this, but I am wondering whether it is already supported.) Let me say what I have done so far:

Still to do:

Problems so far:

Questions:

belun commented 9 years ago

I have managed to serialize/deserialize an Expectation using ExpectationSerializer (the JSON serialize/deserialize methods), and to re-register an Expectation taken from the Proxy (retrieved with ProxyClient.retrieveAsExpectations) with the Server (using MockServerClient.sendExpectation).

Yeeey!! :relieved:
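For reference, that flow might look something like the minimal sketch below, assuming the 3.x-era Java client API referenced in this thread (ProxyClient.retrieveAsExpectations, ExpectationSerializer, MockServerClient.sendExpectation); the ports, the null-request filter and the deserializeArray call are illustrative assumptions:

import org.mockserver.client.proxy.ProxyClient;
import org.mockserver.client.serialization.ExpectationSerializer;
import org.mockserver.client.server.MockServerClient;
import org.mockserver.mock.Expectation;

public class RecordReplaySketch {
    public static void main(String[] args) {
        // record: pull everything the proxy has seen, as expectations
        ProxyClient proxyClient = new ProxyClient("localhost", 9090);
        Expectation[] recorded = proxyClient.retrieveAsExpectations(null); // assumed: null matches all requests

        // round-trip through JSON, i.e. what would be written to / read from disk
        ExpectationSerializer serializer = new ExpectationSerializer();
        String json = serializer.serialize(recorded);
        Expectation[] replayed = serializer.deserializeArray(json);

        // replay: re-register each recorded expectation with the MockServer
        MockServerClient mockServerClient = new MockServerClient("localhost", 8080);
        for (Expectation expectation : replayed) {
            mockServerClient.sendExpectation(expectation);
        }
    }
}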

However, for replay, the headers and cookies need to be cleaned (otherwise the requests will never match). For a start, I have removed them entirely :grinning:

What is your opinion on this cleanup (in case I decide to make a pull request and add Record/Replay to the project)? Should it be configurable?

jamesdbloom commented 9 years ago

To be honest, I consider record-replay an anti-pattern because it leads to very brittle tests that are expensive to support.

In general a test should specify its test data (i.e. mocked requests/responses) within the test. Record-replay leads to tests sharing test data, and to the test data being non-obvious when looking at the test.

If the test data (mocked requests/responses) is complex to set up, or requires a lot of repeated steps that are common to many tests, I would suggest using a test fixture class and a set of response builders. A test fixture would understand which mock requests/responses need to be set up for a specific business function being tested (i.e. user login or user registration). The test fixture can hold the data used to mock the requests and responses; it can also auto-generate appropriate data and then set up the expectation in the MockServer. This should all be done using a defined interface, for example:

A LoginTestFixture class could have a method mockSuccessfulLogin() that would either accept a username and password or generate them, and use these values to set up the appropriate expectation in the MockServer. The expectation would be set up using a builder class that knows how to build the request and another builder class that knows how to build the response.

The LoginTestFixture.mockSuccessfulLogin() method could then either return a User object containing the username and password to be used in the test, or LoginTestFixture could expose two methods, getUsername() and getPassword().

For example, if the test was testing the login page:
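A minimal sketch of what this could look like, assuming MockServer's request()/response() builders; the LoginTestFixture class, the /login endpoint and the loginPage object are illustrative names, not part of MockServer:

import org.mockserver.client.server.MockServerClient;
import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

// illustrative fixture: its only responsibility is setting up login test data
public class LoginTestFixture {

    private final MockServerClient mockServerClient;
    private String username;
    private String password;

    public LoginTestFixture(MockServerClient mockServerClient) {
        this.mockServerClient = mockServerClient;
    }

    // generates credentials and registers the expectation for a successful login
    public void mockSuccessfulLogin() {
        username = "user_" + System.currentTimeMillis();
        password = "secret";
        mockServerClient
                .when(
                        request()
                                .withMethod("POST")
                                .withPath("/login")
                                .withBody("username=" + username + "&password=" + password)
                )
                .respond(
                        response()
                                .withStatusCode(200)
                                .withBody("{\"loggedIn\": true}")
                );
    }

    public String getUsername() { return username; }
    public String getPassword() { return password; }
}

The test setup then reads:

LoginTestFixture fixture = new LoginTestFixture(new MockServerClient("localhost", 8080));
fixture.mockSuccessfulLogin();
loginPage.loginAs(fixture.getUsername(), fixture.getPassword()); // hypothetical page object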

With this approach it is clear from the test setup method what the test data is. In addition, the test setup is encapsulated in a class whose only responsibility is to set up the test data.

This may sound like a lot of code, but generally this sort of approach leads to very simple, resilient and easy-to-manage tests: tests that only fail when there is a genuine failing business requirement, not when the implementation changes. Basically we are trying to avoid complex and expensive tests, replacing them with resilient tests and a small number of very simple test utility classes that help in mocking the parts of the system we are not interested in testing.

This way no recorded, hard-coded data is magically shared between tests. If you record and replay, how do you handle the situation when the format of the mocked responses changes, without having to update all your tests? What happens if one test needs to modify the recorded data? How does someone trying to fix the test in six months' time know which bits of the recorded data are important, which bits are irrelevant, or even why that exact data set was chosen?

What do you think?

belun commented 9 years ago

I understand some things and am not convinced by others. So, let's talk :)

But, first of all, I do not plan to record data for tests to use/reuse. I want to use the recorded data for real server mocking, as in, I want a fake server. I actually want a back-end that is not faked but stubbed, always responding the same, because a real back-end is too expensive. I do not plan to use the recorded data to test the actual server itself (as you mentioned, that is not just brittle, it is quite messed up, in my opinion; "asking for trouble" would be a nice way of describing it).

Anyway, that being my use-case (a server stub), on to your remarks:

  1. "If you record and replay how do you handle the situation when the format of the mocked responses changes without having to update all your tests?" I do not "handle" it. I restart from scratch. At least that specific service, that changed, will have to be recorded again (by using your Proxy, while calling the new back-end; it is a "new" back-end, because I am expecting the real services to change signature of request/response overtime, as the application evolves). I plan to save recordings using a concept I called "profile" (it is actually a folder :D ), like v1.0, and later v1.1. As the application that I have to mock changes its "interface" (contract data), I make a new folder, and setup my Recorder to replay from that folder.
  2. "With this approach it is clear from the test setup method what the test data is, in addition the test setup is encapsulated in a class whose only responsibility is to setup the test data." I would totally agree with this as a strategy. I do not feel like I need it for my use-case.
  3. "A LoginTestFixture class could have a method mockSucessfulLogin()" This would never be the case, for me, as I will never try to implement a mockSucessfulLogin
  4. "What happens if one tests need to modify the recorded data?" Not going to happen. Anyone asking me to do that, I will pay for the bullets. the feature is called record/replay, not record, mess with it, replay.
  5. "How does someone trying to fix the test in six months time know which bits of the recorded data are important and which bits are irrelevant or even why that exact data set was chosen." There are no tests. There is however, recorded data. And, if it is not the kind of data that you would expect from the real version of the application (as opposed to the one being replayed by MockServer), then new recording is necessary.
  6. "This way no recorded, hard coded data is magically shared between tests." This remark, I did not even "compile", as I am hoping will not fit my use-case.

To address the issue of testing the new Recorder: since it has two big features, record (take the Expectation, aka request/response, and save it to file) and replay (read from file and send the Expectation to the server), those will be tested separately.

a. The tests for recording will rely on the implementation of the Serializer, and I am hoping we can extract that a bit and make it more injectable (see the sketch below). This way, testing the recording can use a mocked Serializer and not care if, in time, the format or the serialized data changes. And, as a user of the Recorder, you upgrade to the new version at your own expense :D (obviously, as a user, your recordings will be invalid; maybe we can even version the recorded data, to keep backward compatibility).

b. The test for replay will be a bit more tricky. We just send some once-valid request/response (a recorded Expectation) to the MockServer, then make some calls and check for the expected responses.. but this is the kind of integration test that you already have for the MockServer. Maybe we can simplify and just check that the data that was sent (the Expectation) arrived at the MockServer.
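For point (a), a minimal sketch of the injectable-Serializer idea; the Recorder class itself is hypothetical, only ExpectationSerializer and Expectation come from MockServer:

import org.mockserver.client.serialization.ExpectationSerializer;
import org.mockserver.mock.Expectation;

// hypothetical Recorder: the serializer is injected, so recording tests can
// pass a mocked ExpectationSerializer and stay independent of the file format
public class Recorder {

    private final ExpectationSerializer serializer;

    public Recorder(ExpectationSerializer serializer) {
        this.serializer = serializer;
    }

    public String record(Expectation[] expectations) {
        return serializer.serialize(expectations);
    }
}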

Thanks for taking the time to talk without bashing my ideas. I only hope I did the same. Please feel free to jump in on any of the points above.

PS: I will have to upload the code I have written soon, just so you can see what I did.

jamesdbloom commented 9 years ago

I do support manual record-replay as follows:

1) set up the proxy
2) use the proxy to record the desired interactions
3) use the proxy endpoint dumpToLog?type=java
4) copy and paste the contents of the log into an instance of org.mockserver.initialize.ExpectationInitializer (see the sketch after the plugin configuration below)
5) configure your pom.xml with the mockserver-maven-plugin to use the ExpectationInitializer instance, for example:

<plugin>
    <groupId>org.mock-server</groupId>
    <artifactId>mockserver-maven-plugin</artifactId>
    <version>3.9.2</version>
    <configuration>
        <logLevel>INFO</logLevel>
        <serverPort>8080</serverPort>
        <initializationClass>org.mockserver.MyClasspathInitializationClass</initializationClass>
        <pipeLogToConsole>true</pipeLogToConsole>
    </configuration>
</plugin>
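For step 4, the initializer class would look something like the sketch below; the expectation shown is only a placeholder for whatever dumpToLog?type=java produced, and the initializeExpectations signature is an assumption based on the 3.x maven plugin:

package org.mockserver;

import org.mockserver.client.server.MockServerClient;
import org.mockserver.initialize.ExpectationInitializer;
import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

// fully qualified name must match the <initializationClass> configured above
public class MyClasspathInitializationClass implements ExpectationInitializer {

    @Override
    public void initializeExpectations(MockServerClient mockServerClient) {
        // paste the Java code dumped by dumpToLog?type=java here; the
        // expectation below is only a placeholder example
        mockServerClient
                .when(
                        request()
                                .withMethod("GET")
                                .withPath("/some/recorded/path")
                )
                .respond(
                        response()
                                .withStatusCode(200)
                                .withBody("recorded response body")
                );
    }
}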

6) use the mvn mockserver:runForked command OR add executions to start and stop MockServer as part of the build as follows:

<executions>
    <execution>
        <id>run-as-start</id>
        <phase>clean</phase>
        <goals>
            <goal>runForked</goal>
        </goals>
    </execution>
    <execution>
        <id>stop-at-end</id>
        <phase>verify</phase>
        <goals>
            <goal>stopForked</goal>
        </goals>
    </execution>
</executions>

7) now you have a fake server running on port 8080 that will replay all the requests recorded by the proxy in step 2

With this approach you have basic record and replay, but you do need to copy the Java code dumped into the logs into an instance of ExpectationInitializer.

I do not want to add full record-replay because I think it is a very bad anti-pattern for several important MockServer use cases. I realise that your use case is not testing, so for you this is not a problem; however, I don't want to add this feature as it will negatively affect the other use cases.