This is an exploration and demonstration of using a combination of mocking and non-mocking tests to ensure a well-tested service. After feeling my way through this a bit, there are some practices that feel right:
Always create a data access API wrapper around your data store (MongoDB in this case)
Always define interfaces for your data access API and any other API dependencies you have (other services, etc.)
Generate mocks for your data access API and other dependencies
Write tests for your main service logic that use the generated mocks
Some data store query languages, like MongoDB's, are programming languages in their own right and can embed logic. Always write tests for your data store implementation that run against an actual data store, so the queries themselves get tested.
Using testify/mock has been pretty enjoyable. I've been able to test scenarios that were previously impossible to test without mocking the lotus client and data access API, and that feels great. Paradoxically, you really have to understand how the code under test interacts with your mock objects in order to mock effectively (or at all, really). This feels strange, because you end up thinking a lot about your service's implementation details and how they interact with the mocks, but it must be this way. Testify mocks are configured declaratively. This is good because, beyond what Testify provides for you, there is no state or logic in your mocks (sidestepping the whole testing-the-mock conundrum). It is bad because a lot of mock configuration gets repeated throughout a test file, and it seems hard to figure out how to reuse it efficiently (this is definitely the case on this branch, as you'll see; I plan on trying to clean that up).
I can say that for a service like sendfil, with tests set up as described here, I'd feel very confident things run as intended, and I would feel much less need to spin up the whole stack to do e2e testing (obviously there is a time and place for e2e and integration testing, it just shouldn't be every time you need to test or fix something in a development workflow).