empicano closed this 5 months ago
Attention: 12 lines in your changes are missing coverage. Please review.
Comparison is base (`60d96e1`) 84.5% compared to head (`b71e2e0`) 85.1%.
| Files | Patch % | Lines |
|---|---|---|
| aiomqtt/client.py | 77.7% | 6 Missing and 4 partials :warning: |
| aiomqtt/message.py | 95.2% | 1 Missing :warning: |
| aiomqtt/topic.py | 98.0% | 0 Missing and 1 partial :warning: |
:umbrella: View full report in Codecov by Sentry.
Awesome work. Will take a look this weekend.
Really great to see all the effort put into this :smile: :+1: I'll review this weekend as well. :)
Great work! Everything seems good to me.
Thank you both for your valuable comments! 😊

There's another thing that I was not 100% sure about; maybe you have some ideas on that: I currently implemented the messages generator as a `Client.messages` attribute. We could also implement it as a `Client.messages()` method by accessing the current `_messages()` directly rather than calling and assigning it inside `__init__`.

I'm not sure if this is just a question of style or if there are other reasons for or against. I opted for `Client.messages` because that makes it feel more intuitive (to me) that only one generator can be used concurrently.

What do you think?
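To illustrate the two API shapes under discussion, here is a minimal sketch with dummy clients (no MQTT involved; the class names and the three-message generator are purely illustrative). The attribute style creates the generator once in `__init__`, so there is only ever one instance to iterate; the method style returns a fresh generator per call:

```python
import asyncio


class AttributeStyleClient:
    """Sketch of the attribute style: the generator is created once in
    __init__ and exposed as `self.messages`. All consumers share it."""

    def __init__(self):
        self.messages = self._messages()

    async def _messages(self):
        for i in range(3):  # stand-in for incoming MQTT messages
            yield i


class MethodStyleClient:
    """Sketch of the method style: each messages() call returns a new,
    independent generator that starts from scratch."""

    async def messages(self):
        for i in range(3):
            yield i


async def main():
    # Attribute style: one shared generator object.
    attr_client = AttributeStyleClient()
    shared = [m async for m in attr_client.messages]

    # Method style: every call yields an independent stream.
    method_client = MethodStyleClient()
    first = [m async for m in method_client.messages()]
    second = [m async for m in method_client.messages()]
    return shared, first, second


print(asyncio.run(main()))  # → ([0, 1, 2], [0, 1, 2], [0, 1, 2])
```

With the attribute style, a second concurrent consumer contends for the same generator object, which is what makes the single-consumer constraint visible to users.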
`Client.messages` makes sense to me.
I think it is easiest for our users to have `Client.messages` (as a property). However, this approach comes with some caveats if two consumers iterate through `Client.messages` concurrently.

The above may be surprising to our users. The easiest workaround is just to state it very explicitly in the documentation. I don't think we should try to solve this routing problem; that's too difficult.
I agree that we shouldn't implement different routing models; that would probably build back the same complexity we had before :+1: Currently, something like this:
```python
import asyncio
import aiomqtt


async def handle(client):
    async for message in client.messages:
        print(message.payload)


async def main():
    client = aiomqtt.Client("test.mosquitto.org")
    async with asyncio.TaskGroup() as tg:
        async with client:
            tg.create_task(handle(client))
            tg.create_task(handle(client))


asyncio.run(main())
```
fails with `RuntimeError: anext(): asynchronous generator is already running`; I think that's a good default.
Thanks for the great feedback! I'll merge and release this then 😊
This PR should implement some changes discussed in #226:

- `filtered_messages` and `unfiltered_messages` methods
- `message_retry_set` Client argument

I'm very fond of the idea to switch out `paho` for `rumqtt`, but I think that's better suited for a 3.0; otherwise this gets too big. I have too many ideas and too little time 😄

Feedback is welcome 😋