EmperorCookie closed this pull request 3 years ago
Looks really good, thank you for implementing it cleanly! I only had a quick look though, and won't be able to check everything before tomorrow evening. I might have some questions about e.g. why you're using the thread in daemon mode, but it's your repo, so I won't hold you up merging.
I will gladly answer!
For the daemon mode, only the ThreadedSocketReader uses it in this PR.
If you ask a program to stop, it will ask all the threads to stop and wait until they do. If daemon mode is set, the program can terminate without the threads finishing their work. The reason it's needed for ThreadedSocketReader is that reading from the socket is a blocking call, therefore sending the stop signal to its thread won't do anything if no data is coming in through the socket. AccClient's thread used to be a daemon, but now that there is no blocking call anymore, it will always respond to the stop signal.
Small note: while the socket does have a timeout of 1 second set, which means the ThreadedSocketReader would in theory check the stop signal every second, it's not guaranteed that code using this class would have that timeout set. As a result, I feel like daemon mode is needed.
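A minimal sketch of the general pattern being described, with hypothetical names (not the repository's actual ThreadedSocketReader): a stop flag alone is not enough while recv is blocking, so the thread is made a daemon and the socket gets a timeout so the flag is re-checked periodically.

```python
import socket
import threading


class ReaderSketch:
    """Hypothetical reader thread; not the actual ThreadedSocketReader."""

    def __init__(self, port: int):
        self._socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._socket.bind(("", port))
        # Without a timeout, recv() blocks until data arrives and the stop
        # flag is never re-checked; with daemon=True the interpreter can
        # still exit while the thread is stuck in recv().
        self._socket.settimeout(1.0)
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            try:
                data = self._socket.recv(2048)
            except socket.timeout:
                continue  # nothing received, loop back and re-check the flag
            # ... hand `data` over to whoever consumes it ...

    def stop(self):
        self._stop.set()
        self._thread.join(timeout=2.0)
```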
Thanks for explaining, but I still don't get it. As you explained, there is already a stop signal for stopping the thread. If daemon mode is just a convenience so the thread is killed even when it's forgotten, I don't see any reason to have a stop method/signal at all.
Also, unregistering is triggered twice, in the _run() methods of ThreadedSocketReader and AccClient. Or am I missing something here?
There is a lot going on in the ThreadedSocketReader with all that synchronization and so on. What do you think about the following alternative solution: that class is split up into a receiver and a reader. The former runs in its own thread and just puts the packets received from the socket into a buffer. The elements/messages from this buffer are retrieved by e.g. the main thread and dispatched to the reader, which implements a read method similar to the existing one but without any need for synchronization, providing the data byte by byte. The already implemented receivers can then parse it using the reader as usual. Does that sound reasonable?
About the password, I doubt that any program having access to other processes' memory would bother finding configuration variables there instead of just reading the correct password from the file under Documents/AC2.
About the time fields, it might be better to just keep the integer values instead of converting them to floats with possible inaccuracies, especially since e.g. the lap time will probably be part of many comparisons/conditions in applications using this API.
That's it for now, otherwise the changes look nice. However, I still haven't had a chance to test them yet.
For your first question, the stop signal exists even if the thread is in daemon mode because you may want to stop the client without quitting the main program. Doing so would leave the ThreadedSocketReader running until the program has fully exited, especially considering that UDP sockets do not require an active connection.
Second question: the AccClient and ThreadedSocketReader both have their own stop signal, independent from each other.
In your third question, you mentioned an alternative to the current implementation, in which a receiver would simply read packets and store them in a buffer for consumption. This is exactly what ThreadedSocketReader does. It reads from the socket into a local buffer and makes it available to consume via the read method. Then the "reader" is the AccClient, which uses the read method to consume the buffer.
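Roughly, the pattern being referred to here is a lock-guarded byte buffer with a blocking read. A hypothetical sketch, not the actual implementation:

```python
import threading


class BufferSketch:
    """Hypothetical lock-guarded byte buffer with a blocking read()."""

    def __init__(self):
        self._buffer = bytearray()
        self._data_available = threading.Condition()  # wraps an internal lock

    def feed(self, data: bytes):
        # Called from the socket thread.
        with self._data_available:
            self._buffer.extend(data)
            self._data_available.notify_all()

    def read(self, size: int) -> bytes:
        # Called from the consumer; blocks until `size` bytes are buffered.
        with self._data_available:
            while len(self._buffer) < size:
                self._data_available.wait()
            chunk = bytes(self._buffer[:size])
            del self._buffer[:size]
            return chunk
```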
About the password issue, it may not be a huge deal in this case, but it's best practice nonetheless, so I'd rather do it well if possible. If there's a security flaw, it's best that it's in someone else's code, not mine haha.
For the time fields, I actually agree with your statement; I hadn't thought of this when I decided to convert to seconds. I could either use Decimal or leave it as is. The Decimal format is not super convenient to work with, so I will revert to milliseconds and rename the fields appropriately.
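A quick, self-contained illustration of why integer milliseconds are the safer representation (plain Python with made-up values, nothing project-specific):

```python
# Integer milliseconds: sums and comparisons are exact.
sector_ms = [27_413, 31_287, 24_423]   # hypothetical sector times
lap_ms = sum(sector_ms)                # exactly 83123
assert lap_ms == 83_123

# Float seconds: most decimal fractions have no exact binary representation,
# so equality checks and accumulated sums become fragile.
assert 0.1 + 0.2 != 0.3                # the classic example

sector_s = [ms / 1000 for ms in sector_ms]
lap_s = sum(sector_s)
# lap_s may or may not compare equal to 83.123 depending on rounding, which
# is the kind of surprise that integer milliseconds avoid.
```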
I have tested this PR and all seemed to work nicely. Thanks for your feedback!
About the alternative solution, I explained that really badly. What I meant was that the ThreadedSocketReader is split into two classes, where the first manages the socket, reads the bytearrays from it, and then puts them in a Queue, which can be shared among threads. So the queue contains bytearray objects, each corresponding to a single message, which are retrieved and handed to a reader. This reader only handles a single bytearray/message at a time, since the messages are independent, and provides the current read() method for byte-by-byte parsing. Just to clarify the idea; it's not a huge improvement, but it would make the code a lot cleaner IMO (and eliminate the need for synchronization, since once a message is retrieved and given to the reader it cannot be changed anymore from the outside). See the sketch below.
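A minimal sketch of that split, with hypothetical names (not code from either PR): one thread puts each complete message on a queue.Queue, and a per-message reader provides read() without any locking.

```python
import queue
import socket
import struct
import threading


def receive_loop(sock: socket.socket, messages: queue.Queue):
    """Runs in its own thread: with SOCK_DGRAM, one recv() is one message."""
    while True:
        messages.put(sock.recv(2048))


class MessageReader:
    """Wraps one complete, immutable message; no synchronization needed."""

    def __init__(self, data: bytes):
        self._data = data
        self._position = 0

    def read(self, size: int) -> bytes:
        chunk = self._data[self._position:self._position + size]
        self._position += size
        return chunk

    def read_uint8(self) -> int:
        return struct.unpack("B", self.read(1))[0]


# Consumer side (e.g. the main thread), given an already-bound UDP socket:
#   messages = queue.Queue()
#   threading.Thread(target=receive_loop, args=(sock, messages), daemon=True).start()
#   reader = MessageReader(messages.get())
#   message_type = reader.read_uint8()
```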
> the queue contains bytearray objects, each corresponding to a single message

Unfortunately, this can't happen, because in order to know where a message begins and ends, you need a lot of information about the message that simply isn't available to ThreadedSocketReader.
Which part are you referring to when you say there is synchronization happening?
But of course this can happen! UDP is a datagram protocol, where you either receive the full message at once or nothing at all. This is also how my PR worked, and why I preferred the approach of giving the rest of the message to the struct classes to parse, instead of a pointer to the receiving method.
By synchronization I mean the use of a lock object (dataLock) that you use to prevent issues when multiple threads access the same object at the same time. Since each message is received at once in full, there is no need to take care of such things with a split approach.
UDP itself does indeed do that. I hadn't seen anywhere that Python's sockets tell you that information, and as far as I knew, calls to recv would just chain all the messages together.
Obviously that didn't make sense once you pointed it out, so I went and researched it, and indeed, the information on how Python behaves in this case is not found in the official documentation. Maybe it's common knowledge, but I've always worked with TCP sockets, so I assumed the behavior was the same since there was nothing indicating otherwise. The only confirmation I had of this is a single
So indeed, instead of receiving bytes at a time, it can receive full messages one at a time, and your implementation now makes sense! I will see how much of the code can be simplified with that knowledge and make a PR. And hopefully not miss anything while testing it this time.
tl;dr: I didn't know datagram sockets ensured only one message was received per call to recv, because it wasn't documented in Python.
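For reference, the behaviour can be checked with a few lines of standard-library Python on the loopback interface; message boundaries are preserved, so each recv returns exactly one datagram.

```python
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tx.sendto(b"first message", rx.getsockname())
tx.sendto(b"second message", rx.getsockname())

# Datagram boundaries are preserved: each recv() returns exactly one
# message, never a concatenation of the two (unlike a TCP stream).
print(rx.recv(2048))               # b'first message'
print(rx.recv(2048))               # b'second message'

tx.close()
rx.close()
```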
Yeah, this is also my first time working with UDP. I guess how the sockets behave depends on the OS and the chosen socket type, and since recv() is basically a direct syscall it's not documented in detail. Anyway, I'm glad that's sorted out; I was confused the whole time about why you liked that receiving-based approach so much 😄
All changes from other PRs (#1 and #3) have been implemented in this PR in a way that follows the base concepts introduced by the repository. I'm leaving this PR up for a day or two to allow comments before merging to master.

Notable Changes

- ThreadedSocketReader now handles reading data from the socket in larger chunks to avoid a recv bug with small buffers
- Event now always provides its data through the content attribute
- AccClient.__init__ no longer requires any arguments; the relevant parameters are passed in the start method
- writable attribute