Thanks for the detailed report! This makes sense I think, and I can see how multiple test specs makes the previous discussion more complicated and doesn't work with the setup in https://github.com/mvasin/cypress-mockttp/.
Unfortunately, making a client that connects to an already-running server is harder than it looks in general, because servers and clients really do share some state for some of Mockttp's more advanced functionality. Many rules involve ongoing client-server communication, e.g. the callback rule (from thenCallback), which communicates with the server persistently for each incoming request. Every time a request is received, the client is given the request details, the client runs its registered callback remotely (i.e. in the browser, for Cypress), and then the result is sent back to the server so it can actually handle the request.
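For concreteness, a rule like that looks roughly like this (an illustrative sketch using the Mockttp 2.x method names from the time of this thread; newer versions spell server.get() as server.forGet(), and the path and response here are made up):
import { getRemote } from "mockttp";

const server = getRemote();
await server.start();

// The callback is registered client-side. Each time the mock server receives a
// matching request, it sends the request details back to this client, the
// callback runs here (in the browser, for Cypress), and the result is returned
// to the server, which then uses it to answer the request:
await server.get("/api/user").thenCallback((request) => {
    return { statusCode: 200, body: JSON.stringify({ path: request.path, id: 123 }) };
});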
You can't swap the clients connected to a running server with such a rule and get a sensible result! You can't connect to a running server that's configured with such a rule, and there are also challenges with anything that might allow multiple clients to configure rules at the same time on a single server.
That means you necessarily need some kind of startup & shutdown step between clients which enforces that there's exactly one client at any time, and I think most solutions to that will have similar challenges when used with Cypress. I also think this is good practice for tests: running a separate server for each test guarantees that the tests are independent and there's no leftover state creating unexpected test dependencies.
It's probably possible to work around this with some carefully designed alternative API, and there is almost certainly an interesting approach here which I've been meaning to investigate for some other unrelated functionality, but actually I don't think it's necessary or desirable for your case anyway.
Instead, you can do this with just a slightly different approach to Cypress hook setup. I've opened a PR against your examples with a small change to the import branch that should solve this and give you an independent per-suite mock server. Does that work for your case?
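For reference, the gist of that change is a pattern like the following (a rough sketch rather than the exact diff in the PR; the port is a placeholder):
import { getRemote } from "mockttp";

describe("my feature", () => {
    // One mock server per suite, created and torn down by this suite alone:
    const server = getRemote();

    before(() => server.start(8001)); // placeholder port
    after(() => server.stop());

    it("mocks a request", async () => {
        await server.get("/api/user").thenReply(200, JSON.stringify({ id: 123 }));
        // ...visit the page and make assertions as usual...
    });
});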
There is one limitation: this won't work with your code if you run tests in parallel, because they'll fight over the shared port. There are workarounds for that, but it gets significantly more complicated, and if you need that I'd highly recommend a) using different ports for different tests, b) using the same rules for all tests, or c) running tests that depend on the same mock port in series, not in parallel.
I think there is also a possible alternative approach where you have a support file that sets a global variable (window.mockServer = getRemote()) and you then use that from your spec files instead, but that's a bit less elegant and much less isolated.
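For illustration, that alternative might look something like this (a sketch only; the port is a placeholder, where exactly start/stop happens is up to you, and in TypeScript you'd also need to declare the mockServer global):
// cypress/support/index.js (sketch): one shared client for the whole run,
// started once and exposed globally for every spec file to reuse.
import { getRemote } from "mockttp";

window.mockServer = getRemote();
before(() => window.mockServer.start(8001)); // placeholder port

// In a spec file (sketch): use the shared, already-started client directly.
it("mocks a request", async () => {
    await window.mockServer.get("/api/user").thenReply(200, JSON.stringify({ id: 123 }));
});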
Thanks for your response and sharing your thoughts on this. Really appreciate your time!
Unfortunately, making a client that connects to an already-running server is harder than it looks in general, because servers and clients really do share some state for some of Mockttp's more advanced functionality. Many rules involve ongoing client-server communication, e.g. the callback rule (from thenCallback), which communicates with the server persistently for each incoming request. Every time a request is received, the client is given the request details, the client runs its registered callback remotely (i.e. in the browser, for Cypress), and then the result is sent back to the server so it can actually handle the request.
I wonder how MockServer solves this problem then. Their client connects to a running server and they have something similar to thenCallback in the form of mockWithCallback: https://www.mock-server.com/mock_server/creating_expectations.html#button_response_method_or_closure_callback
I also think this is good practice for tests: running a separate server for each test guarantees that the tests are independent and there's no leftover state creating unexpected test dependencies.
I agree in principle but in Cypress this is achieved in a slightly different way, as described in https://docs.cypress.io/guides/references/best-practices#Using-after-or-afterEach-hooks.
Instead of using after/afterEach hooks to stop the server and/or remove mocks that were defined previously, it's best practice (in Cypress) to always use before/beforeEach instead—so we reset any leftover state just before the test runs. This has a few benefits, as outlined in the article. Essentially, after/afterEach is not guaranteed to run, e.g. if someone restarts a test when it's halfway through running, so the next time we run a test it would misbehave because the server is still running or a mock has not been removed. There are numerous other benefits, but I won't repeat them here as the article does a good job of describing them.
Instead, you can do this with just a slightly different approach to Cypress hook setup. I've opened a PR against your examples with a small change to the import branch that should solve this and give you an independent per-suite mock server. Does that work for your case?
Thanks for your PR! That solves the problem I originally described, but my next problem/question would be "how can I remove the after hook?", as per the best practices I mentioned previously. With MockServer we were able to start the server before running our tests, and then in our tests we had a global beforeEach hook which reset all mocks that existed on the server—so there was no need for after/afterEach. (Cypress has a built-in request mocking tool called cy.intercept which works in the same way.)
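For context, that MockServer pattern looks roughly like this (an illustration, not our actual code; localhost:1080 is just MockServer's default address):
// In a global support file: clear every expectation on the already-running
// MockServer before each test, so no mocks can leak between tests.
import { mockServerClient } from "mockserver-client";

beforeEach(() => {
    return mockServerClient("localhost", 1080).reset();
});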
Perhaps we could stop and then start the server before each test—although I'm not sure if the client would let us call stop if there isn't already a server started (in the first instance)?
I think there is also a possible alternative approach where you have a support file that sets a global variable (window.mockServer = getRemote()) and you then use that from your spec files instead, but that's a bit less elegant and much less isolated.
That could certainly work for us! I will explore this, thank you.
I wonder how MockServer solves this problem then. Their client connects to a running server and they have something similar to thenCallback in the form of mockWithCallback
I did some digging, and I believe they handle this here: https://github.com/mock-server/mockserver/blob/33ce88631b4e3d09b499caccfe1f7897f9fc8f18/mockserver-core/src/main/java/org/mockserver/echo/http/WebSocketServerHandler.java#L97-L108
AFAICT, it seems that each callback is linked to the specific client that opened it, there's one websocket per callback, and when the client websocket disconnects (healthily or otherwise) then the corresponding callback rule is removed automatically, while every other rule each client registered seemingly remains active.
In future I am open to a persistent rule setup which better supports this kind of thing (where the client can connect, set up some rules, and then cleanly disconnect whilst leaving the rules in place indefinitely) because there are some cool use cases, but I don't think this is a particularly suitable model for automated testing where everything should really have a short & tightly controlled lifespan.
Instead of using after/afterEach hooks to stop the server and/or remove mocks that were defined previously, it's best practice (in Cypress) to always use before/beforeEach instead
Ah, ok, that makes sense! And seems perfectly sensible given Cypress's model.
That is a bit difficult in the current Mockttp model though, unfortunately, and this is something that could be improved... It is easy to pass clients between tests using some global variable and shut down the old client in the before() step if necessary, so that's fine, but this doesn't actually help you with the issues described in that Cypress doc. The real problem there is cases where the entire JS process suddenly disappears (e.g. because you refresh the test page or something crashes and breaks everything).
In that case, the client will never cleanly shut down at all. Right now, if a Mockttp client silently dies without cleaning up, but the standalone server keeps running, then it's possible to end up with a dangling mock server that uses the mock port, but which misbehaves and isn't easily configurable or stoppable at all. In practice, if there's no shutdown then you'd have to restart the standalone server to clean it up. Not a disaster, and hopefully not a common occurrence, but it would be good to have a proper option to handle this.
Outside Cypress this isn't really a problem because most environments don't reset in the middle of a test suite, and you'd normally start & stop the standalone server from scratch with each test run, so it's reset implicitly at that point.
I'll try to look at that soon too. I think the best answer is probably a top-level mockttp.resetStandalone(standaloneServerUrl) API, which can be used to remotely reset a standalone server - clearing all rules, stopping all mock servers and disconnecting any clients. That seems like a sensible API to have, to allow you to do this kind of testing with a persistent standalone server while guaranteeing that you get a clean slate every time. It's conveniently also very easy to implement. Would that work for you?
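Concretely, usage would be a single call along these lines (the URL is a placeholder; 45454 is the standalone server's default port):
import * as mockttp from "mockttp";

// Remotely reset the standalone server: clear all rules, stop all mock
// servers it manages, and disconnect any clients.
await mockttp.resetStandalone("http://localhost:45454");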
I think that all makes sense to me! I understand Cypress is slightly unique in that you actually want to keep dangling state so you can debug issues in their test runner UI.
I think the best answer is probably a top-level mockttp.resetStandalone(standaloneServerUrl) API, which can be used to remotely reset a standalone server - clearing all rules, stopping all mock servers and disconnecting any clients. That seems like a sensible API to have, to allow you to do this kind of testing with a persistent standalone server while guaranteeing that you get a clean slate every time. It's conveniently also very easy to implement. Would that work for you?
I'm trying to picture what this would look like in usage. If I understand correctly, we would do what you're doing in your PR, but we would replace the after hook that stops the server with a before hook that resets the server. Is that right?
If so, we would still have the potential issue of dangling mocks between tests, e.g. if a test defines a mock but it's never used because the request never happened, that mock would continue to exist for the next request. How should we solve that problem? Perhaps with a global beforeEach that calls server.reset()?
I think with the proposed API, extending the approach from my PR, a sensible setup would look something like this:
// Assuming the standalone server is already running and reachable:
import * as mockttp from "mockttp";
const standaloneUrl = "http://localhost:45454"; // placeholder; 45454 is the default standalone port

function getMockServer() {
    let server;

    beforeEach(async () => {
        // Try to clean up nicely, if we have a server still running & accessible:
        if (server) await server.stop();

        // As a backup, make sure that we've always cleaned up *everything*, in case
        // a server from some other page load was still running:
        await mockttp.resetStandalone(standaloneUrl);

        // Then start the server again from scratch:
        server = mockttp.getRemote(standaloneUrl);
        await server.start();
    });

    return server;
}
That ensures everything is only cleaned up in the beforeEach(), but it's all still reliably reset before every test.
It's useful to still do the manual server.stop() where possible, even though we're resetting, because although resetStandalone would stop the server regardless, this makes sure that the client side is cleaned up nicely too (in the case where it does still exist).
There's no real downside to recreating and restarting server like this for every test - the performance impact should be negligible in any integration testing scenario, and it keeps things simpler than trying to swap between start & reset depending on the previous test's state.
Thanks, that makes sense. We can give that a go!
Hi @OliverJAsh, this is now available too! Update to Mockttp 2.1.0 and you can now call resetStandalone() exactly as above to shut down all mockttp servers currently managed by the standalone server.
Try to make sure you shut down any clients that might be active before calling this (as in the example above), because existing clients may behave poorly if the standalone server is reset while they're using it. Do let me know if that works for your case and if you have any more feedback.
Thank you! I'll let you know.
I'm going to close this - I think it's all resolved in Mockttp 2.1.0 - but do shout if you're still having issues.
I am trying to migrate from MockServer to mockttp and I'm having a few different issues getting mockttp to play nicely with Cypress. (I have already followed the discussion in https://github.com/httptoolkit/mockttp/issues/35 but the problems described here are different.)
Throughout this description I will refer to some branches which include test cases. The repository lives here: https://github.com/OliverJAsh/cypress-mockttp.
At the end I have a proposal to add a useConfig method, but first I'll describe the problem I'm having.
We want to start the mock server before all of our tests run. It seems like this is something we should do in a global before hook. This is defined outside of a describe block so it will run once before all tests. (Branch: init)
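Roughly, that initial setup looks like this (an illustrative sketch, not the exact code on the branch; the port is a placeholder):
// dummy.spec.ts (sketch): global hooks, outside any describe block.
import { getRemote } from "mockttp";

const server = getRemote();

before(() => server.start(8001)); // placeholder port
after(() => server.stop());

it("mocks a request", async () => {
    await server.get("/api/user").thenReply(200, JSON.stringify({ id: 123 }));
    // ...
});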
In Cypress this works fine until we want to have multiple spec files. As soon as we have multiple spec files we need to move the before hook somewhere else, so it runs before all tests.
In Cypress it's recommended to define the global before hook in cypress/support/index.js, as this is guaranteed to only run once, before all spec files and tests: https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests#:~:text=you%20can%20define%20behaviors%20in
However, if we do that then the spec files will not be able to reference the existing client instance (server)—each spec file will have to create its own. (See cypress/support/index.js and dummy.spec.ts on the support branch.)
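Roughly, what we end up with looks like this (an illustrative sketch, not the exact code on the branch; the port is a placeholder):
// cypress/support/index.js (sketch): start the mock server once for the whole run.
import { getRemote } from "mockttp";

const sharedServer = getRemote();
before(() => sharedServer.start(8001)); // placeholder port

// dummy.spec.ts (sketch): this file can't see sharedServer, so it has to create
// its own client, and that client is never started:
const specServer = getRemote();

it("mocks a request", async () => {
    // Fails, because start() was never called on this particular client:
    await specServer.get("/api/user").thenReply(200, JSON.stringify({ id: 123 }));
});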
This doesn't work either, because the client won't work until you call start: … and we can't call start in each spec file, because the server has already been started by the client we created in cypress/support/index.js. We only want to configure the client in our spec to use the existing server that has already been started.
I hoped I might be able to move the initialised client (server) into a module and then import it from each spec file, e.g. a server-client.js module imported by dummy.spec.ts and dummy2.spec.ts. (Branch: import)
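Roughly (again an illustrative sketch, not the exact branch contents; the port is a placeholder):
// server-client.js (sketch): create and start one client, shared via import.
import { getRemote } from "mockttp";

export const server = getRemote();

before(() => {
    console.log("starting mock server"); // ends up logged once per spec file
    return server.start(8001); // placeholder port
});

// dummy.spec.ts and dummy2.spec.ts (sketch) would both do:
//   import { server } from "./server-client";
// ...but Cypress evaluates the module once per spec file, so the before() hook
// above is registered, and start() attempted, once per spec file too.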
However, this doesn't work either, because the imported server-client.js module will be initialised not once but once per spec file, because of the way Cypress works. This means the before hook is registered twice, and thus we try to start the server twice. We can see this from the console.log I added.
We didn't have these problems when we were using MockServer, because the client worked in a slightly different way. The client wasn't stateful, nor was it responsible for starting the mock server. Rather, with MockServer you have to configure it with the port of a mock server which is already running. This meant we could create a new client as many times as we liked without any of the issues I described above.
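Roughly, the MockServer equivalent was (an illustration, not our actual code; localhost:1080 is MockServer's default address):
// The client is just a thin handle onto a server that is already running, so
// creating one in every spec file is harmless - there's nothing to start.
import { mockServerClient } from "mockserver-client";

const client = mockServerClient("localhost", 1080);

it("mocks a request", () => {
    // Registers an expectation on the running server:
    return client.mockSimpleResponse("/api/user", { id: 123 }, 200);
});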
This makes me think it would be good if mockttp worked in a similar way. Perhaps we could expose a new function on the client that just initialises the client with an existing mockServerConfig: https://github.com/httptoolkit/mockttp/blob/fc48bc4a407e34f9c61667a062710cbc66d8557c/src/client/mockttp-client.ts#L292-L302
We would need to start the mock server outside of the tests. Usage in the tests (dummy.spec.ts) would then look something like the sketch below.
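(Since useConfig is only a proposal, the exact call shape below is illustrative; it might take a port, or the full mockServerConfig.)
// dummy.spec.ts (sketch): the mock server is already running; the client just
// attaches to it instead of starting its own.
import { getRemote } from "mockttp";

const server = getRemote();

before(() => {
    server.useConfig({ port: 8001 }); // attach to the already-running mock server
});

it("mocks a request", async () => {
    await server.get("/api/user").thenReply(200, JSON.stringify({ id: 123 }));
    // ...
});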
I have tested the idea by patching my node_modules locally and it seems to solve the problem. The useConfig-2 branch has a full reduced test case, including a patch that adds a useConfig method. (The patch is applied when you run yarn, via the patch-package tool.)
What do you think? Perhaps you have a better idea of how to solve this!