NodeJSmith opened 2 months ago
@abmantis I went ahead and finished this up - the diff makes it look like a lot of changes, but it's all the same tests, just rewritten using aioresponses, which lets us write them much more cleanly and clearly. I also made sure to keep all the same checks, based on your feedback on the last PR. The one exception is this check in `test_appliancesmanager`:
```python
if account_id is None:
    m.assert_any_call(URL(BACKEND_SELECTOR_MOCK.user_details_url), "GET")
else:
    assert (
        "GET",
        URL(BACKEND_SELECTOR_MOCK.user_details_url),
    ) not in m.requests
```
This one just checks that we do call `user_details_url` if we don't have an account id, instead of explicitly checking that it's the first call. The reason for this is that aioresponses doesn't give us a good way (that I could find, at least) to determine whether this was called first, since it stores the calls in a dictionary keyed by method and URL (e.g. `("POST", "url_value.url.com")`).
If you'd like me to keep looking for a way to assert that it's first let me know, I'm positive that it can be done, I just didn't want to put in a ton of time to do so if we are comfortable with this level of check.
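To illustrate why call order is hard to recover from a store shaped like that, here is a minimal stdlib sketch (hypothetical URLs; not aioresponses' actual internals):

```python
# A request store keyed by (method, url): each key only accumulates its own
# calls, so "was this endpoint hit first overall?" is not recoverable from
# the keys alone - only presence/absence and per-endpoint counts are.
requests = {}

def record(method, url):
    requests.setdefault((method, url), []).append("call")

# Hypothetical URLs standing in for the backend selector's endpoints.
record("GET", "https://api.example.com/user_details")
record("GET", "https://api.example.com/appliances")
record("GET", "https://api.example.com/user_details")

# Presence is easy to assert, which is what the check above settles for:
assert ("GET", "https://api.example.com/user_details") in requests
# But the per-key lists carry no interleaved global ordering:
assert requests[("GET", "https://api.example.com/user_details")] == ["call", "call"]
```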
The tests `test_auth_multiple_client_credentials` and `test_auth_bad_credentials` also got slimmed down some, because we were providing data to the mock that wasn't actually required for the test assertions, so I removed that data.
The other "big" changes were moving the JSON data to data files instead of embedding it in Python, which just feels cleaner, and adding washer and dryer data files - no tests use them yet, but I wanted to get them in there for future tests.
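A sketch of the data-file pattern, with hypothetical file and payload names (the real layout in the PR may differ):

```python
import json
import tempfile
from pathlib import Path

def load_fixture(data_dir, name):
    """Load one JSON payload from the tests' data directory."""
    return json.loads((Path(data_dir) / name).read_text())

# Self-contained demo: write a sample payload, then load it back the way a
# test module would at import time.
with tempfile.TemporaryDirectory() as data_dir:
    (Path(data_dir) / "oven_data.json").write_text(json.dumps({"said": "SAID_1"}))
    oven_data = load_fixture(data_dir, "oven_data.json")

assert oven_data == {"said": "SAID_1"}
```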
Oh, and I also updated the tests GH Action to test against 3.11 and 3.12, since that is what we are supporting now.
@abmantis If you need me to break this up into smaller PRs I can. I wasn't sure how to do so with this, since it's all contained to tests, but if you give me some guidance on what would be easiest to digest I can work with that. I know it's a lot to look at in one go
Hello! When I started adding the tests using aiohttp I did have some trouble, and the result isn't great, I agree 😅 At the time I looked at aioresponses, but for some reason I wasn't able to use it, and ended up doing something similar to what HA does (or did at the time).
I haven't had the chance to review this properly yet, but from a brief look I don't think there's a need to split it. Thanks for the concern!
@abmantis So I created some mocks to add to conftest.py, and I noticed that `Auth` actually contains everything we need for all of our logic - the backend selector, the auth credentials, and the session. We currently pass all of these to everything, but I think if we just accessed them from `Auth` it would work just as well.
Fixtures:

```python
@pytest.fixture
def aioresponses_mock():
    with aioresponses() as m:
        yield m


@pytest.fixture(params=[BackendSelectorMock, BackendSelectorMockMultipleCreds])
def backend_selector_mock(request):
    yield request.param()


@pytest_asyncio.fixture
async def auth_mock(backend_selector_mock):
    session = aiohttp.ClientSession()
    auth = Auth(backend_selector_mock, "email", "secretpass", session)
    yield auth
    await session.close()
```
Example - we previously had a backend selector mock we provided, plus the auth mock, plus the session. The new `auth_mock` fixture actually has all of these, so when we instantiate the `Oven` in the test below we can just use them all from the `Auth` object.
What are your thoughts on this? I don't want to pile up too many changes, but I'm not sure how to approach the tests. Does the below work for our tests or should I specifically separate out all of these pieces, even if it means more work for mocking the calls?
```python
async def test_attributes(auth_mock, aioresponses_mock):
    oven = Oven(auth_mock._backend_selector, auth_mock, SAID, auth_mock._session)
```
Oh and aioresponses doesn't actually work as a decorator, which is why I made a fixture, I found a similar issue here: https://github.com/pnuckowski/aioresponses/issues/218
Regarding the Auth and mock thing, I would prefer to keep the fixtures instead of getting them from Auth, to keep responsibilities separated and also avoid accessing what should be internal stuff from Auth.
I am also wondering about an alternative: could/should we make Aircon/Oven/etc. fixtures? That way, each test only needs to receive that fixture as an argument.
In a general sense, are you okay with all of these classes receiving instances of the same classes (backend selector, `Auth`, session), or should we add a review of the architecture to the backlog at some point? It seems like it could be a code smell.
I'm okay with keeping the fixtures separate, I'll break them back out.
Regarding the appliance fixtures, I think we can do that. I think they all use the same SAID, and we set the data using aioresponses, so that should be feasible. I'll take a look at it in the next day or two.
I actually like the fact that they are separated and each class receives the instances it needs. I usually avoid classes that "aggregate" things, as they tend to become dependency magnets. But I may be missing what you had in mind, so please do let me know if you have any thoughts on how to improve the arch.
Whew, that took me longer than I'd planned.
I believe I implemented all of the changes you requested. I also took your advice and created mocks for the appliances, as well as mocks for the `AppliancesManager` and a fixture for the `aiohttp.ClientSession` with a session scope, so we have one session for the whole duration of the tests and it closes itself. The tests look even cleaner now that they get everything through fixtures.
Two items I want to point out to you:
I added the `with patch("whirlpool.appliance.Appliance._create_headers", return_value={}):` context manager in the appliance mocks, so that the headers were taken care of. I confirmed that all tests still pass as expected, so I think this is okay, but if you want me to do something differently here I can. My other idea was to overwrite the method with a patched method, but I thought that wrapping the whole fixture was probably cleaner until we have a reason to do something different.
In `test_appliancesmanager` I added an autouse fixture for the aioresponses calls that are required by all of the tests. Let me know if you have concerns with this - I wanted to make the tests look a bit cleaner, but if an autouse fixture worries you I can revert it. I also split a lot of the assertions in this file into separate tests to make things more structured.
```python
@pytest.fixture(autouse=True)
def account_id_calls_fixture(aioresponses_mock, backend_selector_mock):
    """Mock the calls to get the user details and owned appliances.

    This is required by all of the tests in this file, so we use the
    autouse=True parameter to ensure that this fixture is always run.
    """
    aioresponses_mock.get(
        backend_selector_mock.user_details_url, payload={"accountId": ACCOUNT_ID}
    )
    aioresponses_mock.get(
        backend_selector_mock.get_owned_appliances_url(ACCOUNT_ID),
        payload={ACCOUNT_ID: owned_appliance_data},
    )
    aioresponses_mock.get(
        backend_selector_mock.shared_appliances_url, payload=shared_appliance_data
    )
    yield
```
@abmantis I moved `_get_account_id` from `AppliancesManager` to `Auth`, as it seems to make more sense there. The way it was previously, we could theoretically find that the account id is not in `Auth`'s `_auth_dict` attribute, fetch the value with a call, and still leave it empty in the `Auth` class, which doesn't make any sense.
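A stdlib sketch of the caching behavior being described (method and key names here are hypothetical stand-ins, not the real whirlpool API): the fetched account id is stored back on `Auth` instead of being left empty after the call.

```python
import asyncio

class Auth:
    """Stand-in for the real Auth; only the caching behavior is sketched."""

    def __init__(self):
        self._auth_dict = {}

    async def _fetch_account_id_from_api(self):
        # Stand-in for the real HTTP call.
        return "12345"

    async def get_account_id(self):
        # Fetch once, then write the value back into the auth state so it
        # is never left empty on the Auth object after the call.
        if "accountId" not in self._auth_dict:
            self._auth_dict["accountId"] = await self._fetch_account_id_from_api()
        return self._auth_dict["accountId"]

auth = Auth()
assert asyncio.run(auth.get_account_id()) == "12345"
assert auth._auth_dict["accountId"] == "12345"  # cached on Auth, not lost
```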
Let me know if the changes look okay to you. I also removed the `test_fetch_appliances_returns_true_if_either_method_returns_true` test, as it basically just tested boolean logic, which was silly.
I also updated the shared/owned tests to use the aioresponses assert method. This does require calling the private `_create_headers` method, but that's the only way to use the assert methods, and I think it's cleaner than what I was doing previously.
Oh, and I fixed the CLI command option - it was missing `await`s on the inputs.
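For context, a minimal stdlib sketch of that bug class (the function names are illustrative, not the real CLI code): forgetting to `await` an async input helper hands the code a coroutine object instead of the value.

```python
import asyncio

async def read_value():
    # Stand-in for an async input prompt in the CLI.
    return "42"

async def main():
    pending = read_value()                # missing await: a coroutine object,
    assert asyncio.iscoroutine(pending)   # not the string the caller expects
    assert await pending == "42"          # awaiting it yields the real value

    value = await read_value()            # the fixed form
    assert isinstance(value, str)

asyncio.run(main())
```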
Let me know if there are any concerns with the changes.
@abmantis Wanted to get your thoughts on this before I start working on the other test files. The existing fixtures and test setup have always been very confusing for me, and I think aioresponses can make everything a lot simpler.
Let me know if you have any preferences or concerns on what I have so far and I can incorporate that before moving on to the other test files.
Note: This branch is off of feature/move_urls_to_backend_selector, so you'll see those commits as well until that PR is approved; it's the test_aircon.py file that I'm focusing on here.