belun closed this issue 9 years ago
I have managed to serialize/deserialize an Expectation using ExpectationSerializer (the JSON serialize/deserialize methods) and to re-register an Expectation from the Proxy (retrieved with ProxyClient.retrieveAsExpectations) to the Server (using MockServerClient.sendExpectation).
Yeeey!! :relieved:
However, for replay, the headers and cookies need to be cleaned up (otherwise the requests will never match). For a start, I have removed them entirely :grinning:
What is your opinion on this cleanup (in case I decide to make a pull request and add Record/Replay to the project)? Should it be configurable?
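The round-trip described above might be sketched as follows. This is a rough, untested outline assuming the MockServer 3.x client APIs (ProxyClient.retrieveAsExpectations, ExpectationSerializer, MockServerClient.sendExpectation); the exact signatures, ports, and null-filter behaviour are assumptions and may differ:

```java
import org.mockserver.client.proxy.ProxyClient;
import org.mockserver.client.serialization.ExpectationSerializer;
import org.mockserver.client.server.MockServerClient;
import org.mockserver.mock.Expectation;

public class RecordReplaySketch {
    public static void main(String[] args) {
        // 1. record: pull what the proxy has seen, as expectations
        //    (passing null is assumed to mean "all recorded pairs")
        Expectation[] recorded =
                new ProxyClient("localhost", 1090).retrieveAsExpectations(null);

        // 2. serialize to JSON so it can be written to a file
        ExpectationSerializer serializer = new ExpectationSerializer();
        String json = serializer.serialize(recorded[0]);

        // ... save `json` to disk, reload it later ...

        // 3. deserialize and re-register on the MockServer for replay
        Expectation replayed = serializer.deserialize(json);
        // NOTE: headers/cookies recorded on the request usually need to be
        // stripped here, otherwise incoming requests will never match
        new MockServerClient("localhost", 1080).sendExpectation(replayed);
    }
}
```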
To be honest I consider record-replay an anti-pattern because it leads to very brittle tests that are expensive to support.
In general a test should specify the test data (i.e. mocked requests / responses) within the test. Record-replay leads to tests sharing the test data and the test data being non obvious when looking at the test.
If the test data (mocked requests / responses) is complex to set up or requires a lot of repeated steps that are common to many tests, I would suggest using a test fixture class and a set of response builders. A test fixture would understand which mock requests / responses need to be set up for a specific business function being tested (i.e. user login or user registration). The test fixture can hold the data being used to mock the requests and responses; it can also auto-generate appropriate data and then set up the expectation in the MockServer. This should all be done using a defined interface, for example:
A LoginTestFixture class could have a method mockSuccessfulLogin() that would either accept a username and password or generate them, and use these values to set up the appropriate expectation in the MockServer. The expectation would be set up by using a builder class that knew how to build the request and another builder class that knew how to build the response.
The LoginTestFixture.mockSuccessfulLogin() method could then either return a User object that contained the username and password to be used in the test, or LoginTestFixture could expose two methods getUsername() and getPassword().
For example, if the test was testing the login page:

LoginTestFixture loginTestFixture = new LoginTestFixture();
User user = loginTestFixture.mockSuccessfulLogin();

The username and password contained in user could then be passed into the appropriate input boxes on the web page. With this approach it is clear from the test setup method what the test data is; in addition, the test setup is encapsulated in a class whose only responsibility is to set up the test data.
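A minimal, self-contained sketch of such a fixture follows. The LoginTestFixture and User classes and the generated credential format are illustrative assumptions, and the MockServer expectation setup is only indicated by a comment, since the builder classes depend on the project:

```java
import java.util.UUID;

// Illustrative sketch of the fixture pattern described above.
public class LoginTestFixture {

    public static class User {
        private final String username;
        private final String password;

        public User(String username, String password) {
            this.username = username;
            this.password = password;
        }

        public String getUsername() { return username; }
        public String getPassword() { return password; }
    }

    public User mockSuccessfulLogin() {
        // auto-generate credentials so each test run is independent
        User user = new User(
                "user-" + UUID.randomUUID(),
                UUID.randomUUID().toString());
        // here the fixture would use request/response builder classes to
        // set up the login expectation in the MockServer, e.g.:
        //   mockServerClient.when(loginRequestFor(user))
        //                   .respond(successfulLoginResponseFor(user));
        return user;
    }

    public static void main(String[] args) {
        User user = new LoginTestFixture().mockSuccessfulLogin();
        System.out.println(user.getUsername());
    }
}
```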
This may sound like a lot of code but generally this sort of approach leads to very simple, resilient and easy to manage tests. Basically tests that only fail when there is a genuine failing business requirement and not when the implementation changes. Basically we are trying to avoid complex and expensive tests and replace them with resilient tests and a small amount of very simple test utility classes that help in mocking the parts of the system we are not interested in testing.
This way no recorded, hard-coded data is magically shared between tests. If you record and replay, how do you handle the situation when the format of the mocked responses changes, without having to update all your tests? What happens if one test needs to modify the recorded data? How does someone trying to fix the test in six months' time know which bits of the recorded data are important and which bits are irrelevant, or even why that exact data set was chosen?
What do you think?
I understand some things and do not agree with others. So, let's talk :)
But, first of all, I do not plan to record data for tests to use/reuse. I want to use the recorded data for real server mocking; that is, I want a fake server. I actually want a back-end that is not faked but stubbed, always responding the same, because a real back-end is too expensive. I do not plan to use the recorded data to test the actual server itself (as you mentioned, that is not just brittle, it is quite messed up, in my opinion; "asking for trouble" would be a nice way of describing doing that).
Anyway, that being my use-case (server stub), onto your remarks :
To address the issue of testing the new Recorder: since it has 2 big features, record (serialize the Expectation, aka request/response, and save it to file) and replay (read from file and send the Expectation to the server), those will be tested separately.
a. The tests for recording will rely on the implementation of the Serializer, and I am hoping we can extract that a bit and make it more injectable. This way, the recording tests can use a mocked Serializer and not care if, in time, the format or the serialized data changes. And, as a user of the Recorder, you upgrade to the new version at your own expense :D (obviously, as a user, your recordings will be invalid; maybe we can even version the recorded data, to keep backward compatibility).
b. The tests for replay will be a bit more tricky. We just send some once-valid request/response (a recorded Expectation) to the MockServer and then make some calls and check the expected result... but this is kind of the integration test that you already have for the MockServer. Maybe we can simplify, and just check that the data that was sent (the Expectation) arrived at the MockServer.
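The injectable-serializer idea in point (a) could look roughly like this. The Recorder and Serializer types here are hypothetical sketches for illustration, not existing MockServer classes; the point is only that a test can pass a fake serializer and ignore the real wire format:

```java
import java.util.ArrayList;
import java.util.List;

public class RecorderSketch {

    // Hypothetical serializer abstraction so the Recorder does not
    // depend on a concrete wire format (JSON today, maybe versioned later).
    interface Serializer {
        String serialize(String expectation);
    }

    // Hypothetical Recorder that receives its Serializer via the
    // constructor, so tests can inject a fake implementation.
    static class Recorder {
        private final Serializer serializer;
        private final List<String> storage = new ArrayList<>();

        Recorder(Serializer serializer) {
            this.serializer = serializer;
        }

        void record(String expectation) {
            // in a real Recorder this would write to a file
            storage.add(serializer.serialize(expectation));
        }

        List<String> recorded() {
            return storage;
        }
    }

    public static void main(String[] args) {
        // a fake serializer lets the recording test ignore the real format
        Recorder recorder = new Recorder(e -> "FAKE:" + e);
        recorder.record("GET /login -> 200");
        System.out.println(recorder.recorded().get(0)); // prints FAKE:GET /login -> 200
    }
}
```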
Thanks for taking the time to talk without bashing into my ideas. I only hope I did the same. Please feel free to jump into any points from above.
PS: I will have to upload the code I have written, soon, just so you see what I did.
I do support manual record-replay as follows:
1) setup the proxy
2) using the proxy to record the desired interactions
3) use the proxy endpoint dumpToLog?type=java
4) copy and paste the contents of the log into an instance of org.mockserver.initialize.ExpectationInitializer
5) configure your pom.xml with the mockserver-maven-plugin to use the ExpectationInitializer instance, for example:
<plugin>
    <groupId>org.mock-server</groupId>
    <artifactId>mockserver-maven-plugin</artifactId>
    <version>3.9.2</version>
    <configuration>
        <logLevel>INFO</logLevel>
        <serverPort>8080</serverPort>
        <initializationClass>org.mockserver.MyClasspathInitializationClass</initializationClass>
        <pipeLogToConsole>true</pipeLogToConsole>
    </configuration>
</plugin>
6) use the mvn mockserver:runForked command OR add executions to start and stop MockServer as part of the build as follows:
<executions>
    <execution>
        <id>run-as-start</id>
        <phase>clean</phase>
        <goals>
            <goal>runForked</goal>
        </goals>
    </execution>
    <execution>
        <id>stop-at-end</id>
        <phase>verify</phase>
        <goals>
            <goal>stopForked</goal>
        </goals>
    </execution>
</executions>
7) now you have a fake server running on port 8080 that will replay all the requests recorded by the proxy in step 2
With this approach you have basic record and replay, but you do need to copy the Java code dumped into the logs into an instance of ExpectationInitializer.
I do not want to add full record-replay as I do think this is a very bad anti-pattern for several important use cases for MockServer. I realise that your use case is not for testing and so for you this is not a problem, however, I don't want to add this feature as it will negatively affect the other use cases.
I am not sure whether this feature exists or not (still investigating the code). If it exists, can you please add an example (record a request/response with the Proxy and replay it using the MockServer)? (A)
If not, then how am I supposed to replay the recorded Request/Responses (I am talking about the data that comes out of ProxyClient.dumpToLogAsJava)? (B)
(I am working on this; however, I am wondering whether this has not already been supported.) Let me say what I did so far:
Still to do:
Problems so far:
new MockServerClient().when...; I guess I have to strip this part off and do myMockServerClient.when...
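Stripping that prefix from the dumped code is a simple string rewrite. A minimal sketch, where the myMockServerClient variable name is taken from the comment above and the dumped line is illustrative:

```java
public class DumpRewriteSketch {
    // Rewrites a line of dumped Java so it targets an existing client
    // variable instead of constructing a new MockServerClient.
    static String rewrite(String dumpedLine) {
        return dumpedLine.replace("new MockServerClient().when",
                                  "myMockServerClient.when");
    }

    public static void main(String[] args) {
        String dumped = "new MockServerClient().when(request().withPath(\"/login\"))";
        System.out.println(rewrite(dumped));
        // -> myMockServerClient.when(request().withPath("/login"))
    }
}
```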
Questions: