chrisbillows opened 8 months ago
Thinking about the best process for this. ~Perhaps two passes - looping over all the test files twice.~ This is a good point - "audit" is the word I used, so I could treat it as exactly that.
Go through test file by test file.
Change names and unify structure:
[ ] How should I handle unwritten tests? Include empty classes? I think so, as this will highlight that they've been missed whenever I open that test file. Possibly add a `# TODO` comment so they can all be found with a single search?
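A minimal sketch of what such a placeholder could look like - the class name, function name, and TODO wording here are all hypothetical:

```python
# Hypothetical placeholder for a function with no tests yet.
# A consistent "TODO: no tests" marker makes every gap findable in one search.

class TestFetchTasks:
    """Tests for a hypothetical fetch_tasks() helper."""

    # TODO: no tests written yet
    pass
```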
This is where I'd have the visibility to pick up what we can consolidate in the conftest.py. I think it's probably a bad idea to have "lots" of things in always-use fixtures (e.g. get requests), as it'll be less clear what is happening. Fixtures that are almost always used feel a bit clunky - and should I ALWAYS use them? I do occasionally make real calls when running the tests, and there was that time where it wasn't clear that was what was happening. Always including a fixture manually, even when it's not used, would offer that protection and make it EXPLICIT all the time - but it's a lot of redundancy? Hmm.
Then it should be simple to implement.
Consolidate patches
The approach to mocking and patching is patchwork and piecemeal. There must presumably be consolidation available?
Simplify
Can the `mock_data` be reduced or consolidated in any way?
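One possible shape for consolidating mock data: a single canonical dict plus a factory that overrides only what a given test cares about. A sketch only - the field names and fixture name are invented for illustration:

```python
import pytest

# Hypothetical canonical fake task; these field names are assumptions,
# not the project's real data shape.
DEFAULT_TASK = {"id": "123", "content": "Example task", "priority": 1}


def make_task(**overrides):
    """Build a fake task, overriding only the fields a test cares about."""
    return {**DEFAULT_TASK, **overrides}


@pytest.fixture
def task_factory():
    """Expose the factory through conftest.py so every test file shares it."""
    return make_task
```

Each test then states only the fields it actually depends on, which keeps the mock data in one place and the per-test intent obvious.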
Some thoughts
Start with the conftest.py and the top-level strategy. What are the things that must always be patched? E.g. can we just create an always-used fixture to patch `requests.get`, ensuring no test will ever accidentally make a real request? Ditto with the `todoist.api` methods? Split into separate issues?
- `conftest.py`
- `test_main.py`