requests-cache / aiohttp-client-cache

An async persistent cache for aiohttp requests
MIT License

Add asyncio.Lock mutex to _request #226

Closed rudcode closed 7 months ago

rudcode commented 7 months ago

With this change, if there are simultaneous requests to the same URL, the URL is only fetched once and the remaining requests are served from the cache.
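
For illustration only, here is a minimal self-contained sketch of the idea (the names below, such as `DedupingFetcher`, are invented for this example and are not the library's API or the actual diff in this PR): one `asyncio.Lock` per URL, so the first caller fetches and later callers read the cached result.

```python
# Illustrative sketch, not the PR's code: serialize identical requests with a
# per-URL asyncio.Lock so only the first coroutine performs the fetch.
import asyncio
from collections import defaultdict


class DedupingFetcher:
    def __init__(self) -> None:
        self._cache: dict[str, str] = {}
        self._locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)
        self.fetch_count = 0

    async def _fetch(self, url: str) -> str:
        # Stand-in for the real network request
        self.fetch_count += 1
        await asyncio.sleep(0.1)
        return f"body of {url}"

    async def get(self, url: str) -> str:
        # One lock per URL: the first caller fetches, later callers wait and
        # then read the cached result instead of hitting the network again.
        async with self._locks[url]:
            if url in self._cache:
                return self._cache[url]
            body = await self._fetch(url)
            self._cache[url] = body
            return body


async def main() -> None:
    fetcher = DedupingFetcher()
    await asyncio.gather(*[fetcher.get("http://localhost:8080/") for _ in range(10)])
    print(fetcher.fetch_count)  # prints 1, not 10


asyncio.run(main())
```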

Open to any feedback.

codecov[bot] commented 7 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 97.26%. Comparing base (ac512b8) to head (79bbefe). Report is 6 commits behind head on main.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #226      +/-   ##
==========================================
+ Coverage   97.22%   97.26%   +0.03%
==========================================
  Files          10       10
  Lines        1010     1022      +12
  Branches      171      173       +2
==========================================
+ Hits          982      994      +12
  Misses         18       18
  Partials       10       10
```

:umbrella: View full report in Codecov by Sentry.

alessio-locatelli commented 7 months ago

@rudcode Thank you for the Pull Request. Could you please rebase and resolve merge conflicts?

I think the code works and I have no remarks, but it would be nice to add a test, because a non-working lock will not fail CI. Something like:

```python
# 1. You need to mock `Backend.write()`.

# 2. Then call the same URL concurrently.
await asyncio.gather(*[session.get('http://localhost:8080/') for _ in range(10)])

# 3. Assert that `write()` was called only once.
mocked_backend_write.assert_called_once()
```
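
A fuller, hedged version of that sketch might look like the following. It assumes pytest with pytest-asyncio, the repo's local test server at http://localhost:8080/, and that the backend persists responses through a `session.cache.responses.write()` method (the `Backend.write()` mentioned above); the exact patch target and signature may differ in the actual code base. A hand-rolled counting wrapper is used instead of a Mock so the real write still runs and later requests can actually hit the cache.

```python
# Hedged test sketch; attribute names and the local test server are assumptions.
import asyncio

import pytest
from aiohttp_client_cache import CachedSession, CacheBackend


@pytest.mark.asyncio
async def test_identical_concurrent_requests_fetch_once():
    async with CachedSession(cache=CacheBackend()) as session:
        real_write = session.cache.responses.write
        write_calls = 0

        async def counting_write(*args, **kwargs):
            # Count cache writes but keep the real behavior so later
            # requests can be served from the cache.
            nonlocal write_calls
            write_calls += 1
            await real_write(*args, **kwargs)

        session.cache.responses.write = counting_write

        await asyncio.gather(*[session.get('http://localhost:8080/') for _ in range(10)])

        # With the lock in place, only the first request should reach the
        # network and be written to the cache.
        assert write_calls == 1
```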

Technically I could merge after adding a test, but I will wait for Jordan Cook's approval in case he decides to reject the PR or request changes.

rudcode commented 7 months ago

Hi @olk-m, okay, I'll rebase and try to add a test for this change. Thank you for the feedback, I really appreciate it.

JWCook commented 7 months ago

Looks like this auto-closed because your fork's main is now the same as this repo's main, but feel free to reopen this or create a new PR.

I'm fine with adding a lock for identical requests, as long as it doesn't impact concurrent performance and there are tests added for the new behavior.
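
On the performance point, an illustrative sketch (not the PR's code): with one lock per cache key, only identical requests are serialized, while requests to different URLs still run fully concurrently; a single shared lock would instead serialize everything.

```python
# Illustrative timing only: per-key locks vs. a single global lock.
import asyncio
import time
from collections import defaultdict

locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)


async def guarded(url: str) -> None:
    async with locks[url]:        # distinct URLs use distinct locks
        await asyncio.sleep(0.1)  # stand-in for the network call


async def main() -> None:
    start = time.perf_counter()
    await asyncio.gather(*[guarded(f"http://localhost:8080/{i}") for i in range(10)])
    # ~0.1 s with per-key locks; ~1 s if all ten requests shared a single Lock
    print(f"{time.perf_counter() - start:.2f}s")


asyncio.run(main())
```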

JWCook commented 7 months ago

@olk-m Is this basically the same approach you were planning to take for #903?

alessio-locatelli commented 7 months ago

> @olk-m Is this basically the same approach you were planning to take for #903?

Yes. Although I did not have a concrete implementation in mind, this functionality is something I was thinking of adding in the future.