Open · Adam-it opened this issue 1 year ago
Thank you for the idea and the reasoning. It makes perfect sense to be able to define different mocks for different scenarios, and it wouldn't be too hard for us to make the file name configurable, defaulting to `responses.json` if no name has been specified.
As for your second suggestion, I wonder if that wouldn't mean a different configuration, basically a preset.
Combining the two: what if you'd start the proxy from a working folder which has the responses and a specific config (if I recall correctly, we don't support project-specific config just yet)? Would that work?
Yes, that would work: a specific `responses.json` for the test scenario, and a `config.json` for it as well. This would also mean the `--failure-rate` option will need to respect the mock responses. Currently, if we mock some response it is always used, no matter what rate we use. This makes total sense 😉 as we don't always need to mock failures 🙂. But it would also make sense that, in case we mock an error response, the `--failure-rate` is considered 👍 (it could either be taken from a separate config or passed as an option when running the tool).
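To make that concrete, a per-scenario setup could look roughly like the sketch below. The property names (`mocksFile`, `failureRate`) are purely illustrative assumptions and not the proxy's actual configuration schema; the point is only that a scenario folder could bundle its own mocks file and its own failure rate for mocked errors.

```jsonc
// Hypothetical per-scenario config.json (property names are illustrative, not the real schema)
{
  "mocksFile": "responses.json", // scenario-specific mocks living next to this config
  "failureRate": 20              // chance (%) that a mocked error response is actually returned
}
```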
BTW, I forgot to mention that if one of those ideas gets approved, I am willing to help out and work on it.
We're tracking adding project-specific config in #62. With that, I think we've got everything covered. The new feature from this request is being able to combine failures with mocks, rather than mocks taking precedence. Let us think about the best way to support this without making the setup too hard to understand or too unpredictable. Perhaps our plugin architecture could play a role here, where by defining the order of plugins you could choose whether mocks or chaos take precedence.
We appreciate your help. We're not accepting community contributions just yet, but we'll definitely keep you in mind for the future. ❤️
I'm very much in favor of allowing developers to set the mock responses file name/path at runtime. In fact, it's something that I was playing around with in #99.
The idea of rates for mock responses is an interesting one. What about keeping it very simple and extending the schema of the mock responses file to allow for an optional `rate` field, in the 0-100 range, at both the file and request entry level? The file-level value defaults to 100 if not specified and applies to every entry, unless a request entry provides its own `rate`, which is then used instead.
This way, the mocks can have a usage rate that is independent of the failure rate for unmatched requests (assuming mocks take precedence over failures).
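A sketch of what that could look like in a mocks file. Only the `rate` property is the proposal above; the overall shape (a `responses` array with `url`, `method`, `responseCode`) is my assumption about the mocks format, and the URLs are just examples.

```jsonc
{
  "rate": 100, // file-level default: matching mocks are used every time
  "responses": [
    {
      "url": "https://graph.microsoft.com/v1.0/me",
      "method": "GET",
      "responseCode": 200
    },
    {
      "url": "https://graph.microsoft.com/v1.0/me/photo",
      "method": "GET",
      "responseCode": 429,
      "rate": 25 // entry-level override: return this mock roughly 25% of the time
    }
  ]
}
```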
Should `rate` on mock responses work only on responses >= 400, or on all responses? I have a hard time imagining when you'd want to use a mocked 200 response but only for some of the requests 🤔 If we're to implement this, should we constrain it to make the Proxy more predictable to use?
That's a very good question.
I think that constraining mock response rates to error responses makes sense.
@Adam-it I'd like to hear your thoughts on this idea; perhaps sharing some scenarios where you'd want to apply rates to mocks would help too.
Sure. So the idea I had was to give the possibility to define rates only for mocked errors (failures). The easiest way would be to have it in the `responses.json` file as a new property that can be set when we mock an error response, but I'm open to any other way of setting it. I was thinking of functionality that would allow me to mock a success response for some request and an additional failure mock for the same request that could happen 20% of the time.

The scenario I was thinking of was to use the tool also for QA tests, which in our case are done manually by testers. We could create dedicated `responses.json` files for dedicated test scenarios. When a tester clicks through the app to perform the tests according to the scenario, we may also mock a potential error that could happen in the app (for example, mocking that the server suddenly stopped responding and our request failed). This kind of (a bit unexpected) case that might happen during QA tests presents a real-life use case which would be hard to create without this kind of proxy tool. In our case the dev/test APIs pretty much always work, and it would be hard to simulate that some request may fail from time to time without explicitly adding it to the API code, which would be really stupid. This kind of proxy tool may give us this possibility and also help us reduce resources, as we would not require any working API for the QA tests either, only good-quality mocked responses 🙂.

A random (mocked) error that might happen during QA tests gives the tester the possibility to check how the app reacts to failures, whether they are handled properly, and to provide feedback on what the user experience was like in case of errors. I hope my description is understandable and answers your question 😉. Let me know if I may help out in any other way. I already booked some time with graph-dev-proxy for the coming weekend as well.
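If the `rate` idea from the earlier comment were combined with this scenario, the same URL could get both a regular success mock and an error mock used only part of the time. This is just a sketch: the file shape, field names, and URL are assumptions, only the `rate` property reflects the proposal being discussed, and it assumes the proxy would fall back to the unrated mock whenever the rated one is skipped.

```jsonc
{
  "responses": [
    {
      // hypothetical: regular success mock, used whenever the error mock below is not picked
      "url": "https://api.contoso.com/orders",
      "method": "GET",
      "responseCode": 200
    },
    {
      // hypothetical: error mock for the same request, returned roughly 20% of the time
      "url": "https://api.contoso.com/orders",
      "method": "GET",
      "responseCode": 503,
      "rate": 20
    }
  ]
}
```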
If this idea is also interesting to you, maybe I should open a separate issue so we can track it? Otherwise we can ignore it 😉.
Thanks for the additional information, @Adam-it! It helps a lot. In your scenario of QA folks testing apps for API failures, if the mocked response has only a 20% chance of occurring, what if the Proxy doesn't return an error? Would they consider the app working, or would they keep rerunning the app until they see an erroneous response, defeating the idea of rates?
They would consider the app working and finish the test, which is also OK. Testers follow the scenario step by step. If only some of them get the (mocked) error during the scenario, they should try to resolve it within the app (clicking the provided retry button if there is one, refreshing, or whatever). If they manage to 'resolve' the issue and continue the scenario, that is also fine, and we get the feedback that something unexpected happened but the app was prepared for it and the user managed to retry and continue, which is a very realistic case. If the failure crashed the app (the worse case), we also get the feedback, but the scenario is not completed.

Currently we are unable to 'prepare' this kind of test scenarios and cases. We only test the success path from beginning to end of the scenario, without the possibility to 'prepare' a failure. If we were supposed to also test something like this, we would probably need some setting or an API which always fails, and a scenario whose steps explicitly walk through the bug-handling case. We don't do that, and expecting a bug to always happen and having it in the scenario is rather unrealistic and provides no value. Going over scenarios that end with success from beginning to end provides little value and little confidence that the app is working. Scenarios that randomly fail for some testers, where the user either manages to resolve the failure or it crashes the app, provide the most value 😉.
Understood. Thank you for clarifying.
Background
Currently, I am researching whether we can use ms-dev-proxy as a tool to support us in integration/manual/QA tests for our apps, and mocking responses is just the perfect functionality we could use 😉.
Idea
The ms-dev-proxy uses a `responses.json` file in the working directory, which is just perfect for keeping the mocks related to the app in the project's folder. What I was thinking is that maybe ms-dev-proxy could have a new option that would allow specifying the relative path to the `responses.json` with the mocks to be used, maybe something like `-r --responses-file-path` (optional). The aim of this would be to have different test scenarios with different mocks for the same app and then use them as part of integration tests. That way I could keep a separate mocks file per test scenario, and each integration test could start ms-dev-proxy giving a different `responses.json` as a param.

TBH this is a very low-priority 😉 (and probably stupid) idea, as the current workaround I have for it is to keep a `responses.json` in subfolders and run ms-dev-proxy in the subfolder with the correct mocks as the working directory, so each subfolder provides the `responses.json` file to be used for that test.
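For illustration, one of those per-scenario mocks files might look roughly like the sketch below; the exact `responses.json` schema is defined by ms-dev-proxy, so the field names, the folder path, and the URL here are only illustrative assumptions.

```jsonc
// Hypothetical ./test-scenarios/order-happy-path/responses.json (field names and URL are illustrative)
{
  "responses": [
    {
      "url": "https://api.contoso.com/orders",
      "method": "GET",
      "responseCode": 200,
      "responseBody": { "orders": [] }
    }
  ]
}
```

Each integration test would then point the proxy at the scenario it needs, either by starting it from that scenario's folder (today's workaround) or, with the proposed option, by passing the path to that scenario's mocks file.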