gjcarneiro opened this issue 5 years ago
I'd be happy to take patches for multiple backends, with an interface like etcd3.client(backend='asyncio'). See also #102.
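For illustration (AioClient and the etcd3.aio module below are assumed names, not existing APIs), the factory could dispatch on the backend argument roughly like this:

# Hypothetical sketch of backend selection in etcd3.client().
def client(host='localhost', port=2379, backend=None, **kwargs):
    if backend == 'asyncio':
        from etcd3.aio import AioClient   # assumed submodule / class name
        return AioClient(host=host, port=port, **kwargs)
    return Etcd3Client(host=host, port=port, **kwargs)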
I've made considerable progress on this:
8 failed, 62 passed, 1 skipped, 4 warnings in 82.51 seconds
Still some tests to fix.
Main problems I encountered:
The wait callback cannot be async, but it needs to perform an async call (await self.etcd_client.watch_once(self.key, remaining_timeout)). I commented out this watch_once() call and the tests still pass, but I have no idea why it is needed. To be honest, I feel like the Lock.acquire() method is abusing the tenacity module; it should be rewritten without it, which would also make the code more readable (rough sketch of what I mean below).
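For illustration only (not code from my branch; _try_acquire is a hypothetical helper that attempts the lock transaction once), a plain asyncio retry loop could replace the tenacity-based wait:

# Sketch only: a tenacity-free retry loop for an asyncio Lock.acquire().
import time

async def acquire(self, timeout=10):
    deadline = time.monotonic() + timeout
    while True:
        if await self._try_acquire():
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        # Wait for the lock key to change before retrying; real code would
        # also need to handle watch_once() timing out.
        await self.etcd_client.watch_once(self.key, remaining)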
@gjcarneiro great, thanks for taking this on!
Good news: I made a small benchmarking script, and this is not much slower than the native etcd benchmarking tool, which is written in Go. Considering how much slower Python is compared to Go, it's a good result.
(env37) 05:35:52 ~/projects$ python bench-etcd3-aio.py 10.128.41.79 1 2000
MAX_LATENCY: 0.020213561998389196
avg latency: 0.004505613304037979
With the native benchmark tool:
05:25:17 ~/projects$ benchmark --endpoints=10.128.41.79:2379 put
[...]
Summary:
Total: 39.3733 secs.
Slowest: 0.2358 secs.
Fastest: 0.0028 secs.
Average: 0.0039 secs.
Stddev: 0.0028 secs.
Requests/sec: 253.9792
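Side by side, that is roughly 0.0045 s average latency for the asyncio client versus 0.0039 s for the Go tool, i.e. about 15% higher; the two runs may not use identical concurrency, so treat it as a rough comparison.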
I used this test program:
import asyncio
import sys
import time

import etcd3

MAX_LATENCY = 0
TOTAL_TIME = 0
WRITES = 0


async def writer(etcd, numwrites):
    global MAX_LATENCY, TOTAL_TIME, WRITES
    for _ in range(numwrites):
        t0 = time.perf_counter()
        await etcd.put("/foo", "x" * 16)
        t1 = time.perf_counter()
        elapsed = t1 - t0
        MAX_LATENCY = max(MAX_LATENCY, elapsed)
        TOTAL_TIME += elapsed
        WRITES += 1


async def main(argv):
    host = argv[1]
    numwriters = int(argv[2])
    numwrites = int(argv[3])
    etcd = etcd3.client(host=host, port=2379, timeout=10, backend="asyncio")
    writers = [writer(etcd, numwrites) for _ in range(numwriters)]
    await asyncio.gather(*writers)
    await etcd.close()
    print("MAX_LATENCY: ", MAX_LATENCY)
    print("avg latency: ", TOTAL_TIME / WRITES)


def _run():
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main(sys.argv))
    finally:
        loop.close()


if __name__ == "__main__":
    _run()
Any updates? The latest version does not support such a backend parameter.
Any progress on this? It's been 9.5 months.
@gjcarneiro can you use anyio instead of asyncio, to support trio, curio and asyncio from one source?
Hm... not seeing much value in anyio, to be honest; reminds me of https://xkcd.com/927/
Well, not really, because anyio doesn't compete with asyncio, trio or curio; it allows libraries to work with any of them by defining a subset interface of all three.
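For reference, a minimal anyio example (a generic sketch, unrelated to etcd3): the same coroutine runs on either event loop just by naming the backend at run time.

# Minimal anyio illustration: one coroutine, run on asyncio or on trio.
import anyio

async def main():
    await anyio.sleep(0.1)
    print("done")

anyio.run(main)                    # default asyncio backend
anyio.run(main, backend="trio")    # same coroutine on trio (trio must be installed)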
I'll be blunt. Honestly, I just care about asyncio. Trio and curio should work with asyncio, not against it. Adding another API on top of the 3 async APIs doesn't solve anything.
Any progress on this one?
Sorry, not from me. Seems like @hron has picked up from where I left off, good man! :+1:
I'm really looking forward to this. Can somebody please provide an update? @hron
@tsaridas it's not up to me to merge this PR. You can check this project: https://github.com/martyanov/aetcd3. It's based on my work.
Hello, the current pull requests use other external gRPC libs. I'm working on an asyncio-compatible client that uses the same grpc.io lib and reuses the existing code as much as possible. I will create a pull request soon, with updated documentation and all tests.
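A rough sketch of that approach (the etcd3.etcdrpc import path and message fields are assumptions about the generated stubs, not final code): grpcio's own grpc.aio channel can drive the existing stubs directly.

# Sketch only: reuse the generated etcd stubs with grpcio's asyncio API.
import grpc
from etcd3.etcdrpc import rpc_pb2, rpc_pb2_grpc

async def put(endpoint, key, value):
    async with grpc.aio.insecure_channel(endpoint) as channel:
        kv = rpc_pb2_grpc.KVStub(channel)
        request = rpc_pb2.PutRequest(key=key.encode(), value=value.encode())
        return await kv.Put(request)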
Any thoughts on supporting asyncio? Anyone working on this?
I think we could use https://github.com/hubo1016/aiogrpc to provide an asyncio version of this module.
It doesn't necessarily have to be a fork; maybe we can create an etcd3.aio submodule or subpackage, containing async versions of the same interfaces. Any thoughts?
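For illustration (not a tested design; the etcd3.etcdrpc import path is an assumption), the aiogrpc wrapper would let the same generated stubs be awaited from such an etcd3.aio subpackage:

# Sketch only: aiogrpc wraps a regular gRPC channel so stub calls can be awaited.
import aiogrpc
from etcd3.etcdrpc import rpc_pb2, rpc_pb2_grpc

async def get(key):
    channel = aiogrpc.insecure_channel('localhost:2379')
    kv = rpc_pb2_grpc.KVStub(channel)
    response = await kv.Range(rpc_pb2.RangeRequest(key=key.encode()))
    return response.kvs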