Closed Andrew-Chen-Wang closed 2 years ago
I've been using this for a week or two (just running off a commit hash from that time period). It would be nice to be able to push my project up to PyPI, but I can't because PyPI won't let you use a direct dependency. Have you considered releasing a 2.0.0a preview version (or something like that)?
Yes, we've considered it, but I don't have release permission; only seandstewart does, though @abrookins might? @abrookins I suggest we push out the alpha version. It won't show up as an upgrade to people, but it at least makes installation easier.
You can see the list of those with access here; there are only 3.
Hey @asvetlov - you're on the pypi account; any chance you could give access to the two Andrews so we can continue release work while Sean is AFK? He made them admins on the repo, so I doubt he'd be opposed to them having pypi access.
@bmoscon - thanks for stepping in with your vote of confidence. I've been AFK for the last ten days because my father passed away and I'm helping my mother with everything. I commented on the Maintainership ticket letting folks know.
https://github.com/aio-libs/aioredis-py/issues/822#issuecomment-808465684
@Andrew-Chen-Wang the GitHub action for automatic deployment from a tag is live in master, so it should only require a tagged commit on master. Note that I haven't tested the action, so it may need tweaking to get it right. If you're unable to publish an alpha package via the action, let me know and I'll step online to get one published to pypi.
@seandstewart wow sorry to hear that! Take care of yourself/your family!
I'll be more available to help out in May, but in the meantime I agree @Andrew-Chen-Wang, let's push an alpha 2.0 release. Pip should prevent people from installing an alpha unless they ask for it IIRC. Sounds like we need to tag 2.0.0a1 but also change the version in master to use the alpha signifier. I can work on that.
@Andrew-Chen-Wang Can you review? Should be short. :D https://github.com/aio-libs/aioredis-py/pull/934
Alas, it looks like it's hung. Hopefully there is an easy way to retry...
Our first 2.0.0 alpha release is now available on PyPI: https://pypi.org/project/aioredis/2.0.0a1/
I'm already using it :D
Hi guys, is there any release timeline for aioredis 2.0? The alpha works great so far, but using an alpha release on my main software trunk isn't ideal, since an alpha version may undergo major changes. Is it the documentation that's delaying the release?
@rafaelSorel I would say the ETA is the middle of June at worst; optimistically, maybe the beginning of May when abrookins takes over, or the middle of April when seandstewart returns, assuming there isn't anything major.
TL;DR the following portion: I feel I need a second/third opinion before making the final huge leap to 2.0.0, and the RTD docs would need to be updated first.
Hi, I was just pointed to this issue when discussing candidates for the first AnyIO enabled redis client. Would you be willing to consider letting me base this project on it? I would pitch in with packaging and testing setup too, as I consider those my strong suits.
@rafaelSorel We don't have a release timeline yet, but we'll make sure to update this issue when we do. This release needs more testing in live environments, and we also need access to the docs site.
I've gone over the code and found some dangerous code: __del__ calls async code even though it could be triggered in any thread.
Is there a gitter room / discord server / irc channel you guys collaborate on? This would potentially go forward quicker than just posting messages here now and then.
Ah, that Gitter room must be "aio-libs" which I have already joined, right?
and we also need access to the docs site.

FWIW, latest gets updated on pushes to master: https://readthedocs.org/projects/aioredis/builds/
Yes, but if you look, those docs are empty because we switched from Sphinx to mkdocs and it's still configured for Sphinx.
and we also need access to the docs site.
@seandstewart, you're now a maintainer.
Is there going to be a call for testers at some point soon? I have an open source project using aioredis that's currently pinned to Python 3.7/Redis 5 but looking to update. Was wondering if there are specific stacks you're looking for feedback on.
It might only be semantics, but I'd prefer using something with a beta label over an alpha one.
Stoked you're so close to release!
@popravich thanks for that - the new documentation is now live: https://aioredis.readthedocs.io/en/latest/
Ah, that Gitter room must be "aio-libs" which I have already joined, right?
Hi @agronholm - that Gitter room seems pretty quiet. I'm new on this project and not sure if there's a proper real-time chat somewhere, but I hang out in the Redis discord server: http://discord.gg/redis
I don't think the server has client-specific rooms, but we have language-specific rooms like Python, etc.
Hi! Just wanted to report a few issues I faced recently. Just to be clear, I'm a new user of aioredis and in-fact, Redis itself. I think the issues I faced are mostly due to the inconsistencies that have arisen due to this massive change.
I think either the docs should be switched back to the version with the 1.3.1 guide, OR the tag should be updated (along with adding instructions on how to install the latest release) 🤷‍♂️. And btw, here's my new tiny project in which I'm using this alpha version: shawarmaKoders/Hedwig :)
@Vibhu-Agarwal Thanks for the report and good job getting it worked out. Fixed in #970
I tried 2.0.0a1 for a new project and it seems to work fine so far. Just a minor nitpick: the scan() function gets a "_type" parameter, which I would have expected to be called "type_" (as suggested by PEP 8).
Hi folks, I wanted to post an update here about what I've been doing. Recently, I rewrote a production service to use aioredis-py with FastAPI. The migration went fairly well, and I haven't noticed any problems.
@Andrew-Chen-Wang We have docs access now IIRC. I feel pretty good about releasing 2.0 in some form. What do you think?
Looks like I still need to follow up on that from_url issue you pinged me in. I'll take a look today.
@abrookins I agree; nothing too major with all the bug fixes so far. Even with the from_url issue, it seems like that only happens if the initial connection fails. I'm comfortable with a formal release!
So, trying this again: is there any interest in using structured concurrency with this library, or adding trio support? Both of which could be achieved at the same time by porting to AnyIO?
Hi @agronholm, thanks for following up! I prefer to keep this library as basic as possible, so while I appreciate you asking, I'm not interested. @Andrew-Chen-Wang can weigh in with his thoughts if he has anything to add or feels differently.
This is great news 📰.
I’m very happy to see this library is moving forward as I’ve had to step away to spend time with my family. I foresee more availability in June and look forward to helping out as I can as you take over maintenance @abrookins
@agronholm I agree with abrookins. There's only so much time and effort we can put in; plus, I'm not actually that acquainted with Trio (which itself is not very mature last I checked). We discussed having an "extension" module, but I think the AnyIO implementation would just be a complete rewrite, so a fork is the best response I can give.
I've gone over the code and found some dangerous code: __del__ calls async code even though it could be triggered in any thread. Is there a gitter room / discord server / irc channel you guys collaborate on? This would potentially go forward quicker than just posting messages here now and then.
Better late than never, but do you mind pointing to an example bit of code? I'm pretty sure all those disconnect calls always use get_event_loop(), and I don't think we actually support passing in your own loop anymore. In other words, I think all those __del__ calls are safely in the right context...
@Andrew-Chen-Wang I've been meaning to look into this more. The test suite in the FastAPI app I manage that's running 2.0 complains about our failure to await disconnect():
test_1 | tests/test_validators.py::test_skip_404_allows_non_404
test_1 | /usr/local/lib/python3.8/asyncio/base_events.py:641: RuntimeWarning: coroutine 'Connection.disconnect' was never awaited
test_1 | Coroutine created at (most recent call last)
test_1 | File "/usr/local/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
test_1 | self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
test_1 | File "/usr/local/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
test_1 | res = hook_impl.function(*args)
test_1 | File "/usr/local/lib/python3.8/site-packages/_pytest/fixtures.py", line 1117, in pytest_fixture_setup
test_1 | result = call_fixture_func(fixturefunc, request, kwargs)
test_1 | File "/usr/local/lib/python3.8/site-packages/_pytest/fixtures.py", line 924, in call_fixture_func
test_1 | fixture_result = fixturefunc(**kwargs)
test_1 | File "/usr/local/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 160, in wrapper
test_1 | return loop.run_until_complete(setup())
test_1 | File "/usr/local/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete
test_1 | self.run_forever()
test_1 | File "/usr/local/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
test_1 | self._run_once()
test_1 | File "/usr/local/lib/python3.8/asyncio/base_events.py", line 1851, in _run_once
test_1 | handle._run()
test_1 | File "/usr/local/lib/python3.8/asyncio/events.py", line 81, in _run
test_1 | self._context.run(self._callback, *self._args)
test_1 | File "/usr/local/lib/python3.8/site-packages/aioredis/connection.py", line 631, in __del__
test_1 | coro = self.disconnect()
test_1 | self._ready.clear()
test_1 |
test_1 | -- Docs: https://docs.pytest.org/en/stable/warnings.html
@abrookins It may be due to loop.create_task(coro) in the __del__ method. Try replacing that call with asyncio.run_coroutine_threadsafe(coro, loop).
I realise I'm chiming in rather late, but has any consideration been given to renaming the library rather than calling this version 2 under the same "aioredis" name? From a quick look it seems like the old API has been thrown away and replaced, and reusing the name could make migration more difficult because Python doesn't allow two versions of the same package name to coexist in a process or environment.
@bmerry You can read the conversation in #891. 1.X.X hasn't been touched in two years, and no one really wanted to carry its complexity forward. Thus, I urge you to try migrating to 2.0.0. Thanks!
I have to agree with @agronholm that Connection.__del__ looks extremely dangerous:

- It uses asyncio.get_event_loop(), which will return the event loop of the thread running the finalizer, which might not be the same thread that the connection was originally used with.
- It doesn't go through call_soon_threadsafe.

In my opinion redis-py is misguided in trying to use __del__ to close down connections. If __del__ is getting called on open connections, then the user has chosen to let the garbage collector sort things out rather than managing the process, so one might as well let the garbage collector do the rest of the work too.
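To illustrate the first point, here's a small self-contained sketch (names are illustrative, not aioredis code) showing that asyncio.get_event_loop() is per-thread state: a thread other than the one the loop belongs to gets no loop at all, which is exactly the situation a finalizer can find itself in.

```python
import asyncio
import threading

results = {}


def finalizer_like():
    # Simulates code (e.g. __del__) running in a thread that the
    # connection was never used with: get_event_loop() finds no loop.
    try:
        asyncio.get_event_loop()
        results["worker"] = "loop"
    except RuntimeError:
        results["worker"] = "no loop"


# Main thread sets up a loop the "connection" would normally use.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
results["main"] = "loop"

t = threading.Thread(target=finalizer_like)
t.start()
t.join()
loop.close()

print(results)  # → {'main': 'loop', 'worker': 'no loop'}
```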
I've spotted an issue that might cause some issues with migration from 1.x. In 1.x, you could make as many concurrent calls as you liked, and if necessary they would block until the connection pool had a free connection, even if you had a bound on the number of connections. With 2.0.0a1, trying to do too much concurrency results in an aioredis.exceptions.ConnectionError ("Too many connections").
The default in redis-py (and hence in aioredis) is 2**31 connections, which should be enough for anybody. But if someone is migrating code from 1.x with an explicit upper bound (which might be needed because 1.x defaulted to 10) they might get a nasty shock in production the first time the load is high enough to actually need all the connections and there is an exception.
I don't know what the best solution is, given the goal of following redis-py as closely as possible. Maybe it just merits a mention in the migration document that max_connections has this behaviour, which is different from aioredis 1.3, and that it probably shouldn't be used (aiojobs may be an alternative to restrict the number of connections).
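One client-side workaround along those lines (a hedged sketch, not aioredis API; `fake_redis_call` stands in for a real command) is to bound concurrency with an asyncio.Semaphore, so excess callers wait for a slot instead of tripping the pool's "Too many connections" error:

```python
import asyncio

MAX_IN_FLIGHT = 3  # cap on concurrent Redis operations


async def fake_redis_call(i):
    # Stand-in for an aioredis command, e.g. `await redis.get(key)`.
    await asyncio.sleep(0.01)
    return i


async def bounded(sem, i):
    # Callers block here when MAX_IN_FLIGHT operations are in progress,
    # mirroring the 1.x pool behaviour of waiting for a free connection.
    async with sem:
        return await fake_redis_call(i)


async def run_all(n=10):
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    return await asyncio.gather(*(bounded(sem, i) for i in range(n)))


if __name__ == "__main__":
    print(asyncio.run(run_all()))  # → [0, 1, 2, ..., 9]
```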
@bmerry, have you looked at using the BlockingConnectionPool?
@erewok Your posted link goes to some Mulligancloud? Probably you had another link on the clipboard.
@bmerry, have you looked at using the BlockingConnectionPool?
Thanks, that's exactly the solution! (Although the docstring possibly needs rewriting, since it claims to be thread-safe whereas in aioredis it isn't; it is async-safe.) Probably worth a mention in the migration documentation for anyone who has used a bounded connection pool in aioredis 1.x.
@erewok Your posted link goes to some Mulligancloud? Probably you had another link on the clipboard.
Yeah. Bad copy pasta. Sorry about that.
@bmerry, can you update the link in your comment to this: https://github.com/aio-libs/aioredis-py/blob/c54fca02e216f0fb046b83e6403315fb00cd43e6/aioredis/connection.py#L1439
Not a huge deal, but could certainly confuse people in the future...
Is this an appropriate place to ask about the BlockingConnectionPool (I can start a new issue if you want)? I have been trying to use BlockingConnectionPool from v2.0.0a1 in places where ConnectionPool is working fine, and I keep hitting an exception on disconnect:
File "/location/of/dependencies/lib/python3.9/site-packages/aioredis/connection.py", line 1596, in disconnect
async with self._lock:
AttributeError: 'BlockingConnectionPool' object has no attribute '_lock'
This is in this chunk of code: https://github.com/aio-libs/aioredis-py/blob/c54fca02e216f0fb046b83e6403315fb00cd43e6/aioredis/connection.py#L1598
It looks like the direct reason for this is that on __init__ a regular ConnectionPool ends up calling reset, which sets the following attributes:
self._lock = asyncio.Lock()
self._created_connections = 0
self._available_connections = []
self._in_use_connections = set()
However, the BlockingConnectionPool does something different in its reset method: https://github.com/aio-libs/aioredis-py/blob/c54fca02e216f0fb046b83e6403315fb00cd43e6/aioredis/connection.py#L1491
I am probably going to use ConnectionPool for now: I think the BlockingConnectionPool isn't totally usable yet.
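The failure mode described above can be reproduced with a minimal toy version of the pattern (class names here are illustrative, not the real aioredis classes): the base class only creates _lock inside reset(), so a subclass overriding reset() without setting _lock leaves disconnect() broken.

```python
import asyncio


class Pool:
    def __init__(self):
        # Like ConnectionPool.__init__, which relies on reset()
        # to initialize internal state such as _lock.
        self.reset()

    def reset(self):
        self._lock = asyncio.Lock()

    async def disconnect(self):
        async with self._lock:  # fails if reset() never set _lock
            return "disconnected"


class BlockingPool(Pool):
    def reset(self):
        # Mirrors BlockingConnectionPool.reset() not setting _lock.
        pass


async def demo():
    ok = await Pool().disconnect()
    try:
        await BlockingPool().disconnect()
        return ok, None
    except AttributeError as exc:
        return ok, str(exc)


if __name__ == "__main__":
    print(asyncio.run(demo()))
```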
@erewok probably would be better as an issue now that seandstewart closed most 1.3.1 issues.
@abrookins @seandstewart do you think it's worth it to add or update the CI matrix to use Redis 6.2? Lots of bug fixes are coming. It might be worth doing another pre-release instead of 2.0.0 for now.
@bmerry I'd also like to bump this comment: https://github.com/aio-libs/aioredis-py/issues/930#issuecomment-857540732 I believe we have a migration section/page in the docs where we can mention this.
Is it possible to set a timeout in the Redis URL or through a parameter of from_url?
Hello, any ETA on 2.0.0 release?
@Bautista-Baiocchi-lora I want to get a second pre-release out with all our bug fixes on master plus #1045, #1048, #1049, and #1051 merged by the end of this week. @seandstewart @abrookins is that possible, and do you think we should just do a 2.0.0 release instead?
Hi all,
After #891, aioredis will be making a major jump to aioredis==2.0.0. The PR made aioredis an enormous async port of redis-py. For the migration docs, please refer to: https://github.com/aio-libs/aioredis-py/blob/master/docs/migration.md

To install (old pip install instructions for the beta, left for posterity):

`pip install aioredis==2.0.0b1`

To install aioredis at a specific commit:

`pip install git+git://github.com/aio-libs/aioredis-py.git@eb4c001a598c01d9a0e5f2a24065c486c823d1b0`

To install at master:

`pip install git+git://github.com/aio-libs/aioredis-py.git@master`

We'd love to hear how well this new version of aioredis integrates with your current production code! @abrookins and I will be updating this specific GitHub issue with any updates that you should be aware of; any breakages can be reported as a separate GitHub issue or in this one, too.
Thanks everyone!