Currently, the server handles all requests synchronously (on one thread) and completes each injection before moving on to the next request. Each request takes >100 ms in total. It isn't clear what the limiting factor is (something to investigate).
If the server could handle incoming packets faster, device queues could be flushed more quickly instead of piling up until the devices start timing out.
Two approaches:
1) Handle requests on individual threads or processes (Threading or Forking versions of SocketServer). I tried this but MySQL is unhappy (the connection is shared across threads). There may be a fix in the Python MySQL API.
2) Use a queue. The incoming request would go straight to a queue and the TCP server is then available to receive a new packet. A separate thread or process handles the decryption, parsing, and injection. However, in order to return the branch and update flag to the device, the return packet has to be reworked.
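A minimal sketch of approach 1, using the Python 3 module name (socketserver); the handler body is a stand-in, since the real decrypt/parse/inject steps and the MySQL calls are omitted. One candidate fix for the MySQL issue is to open a fresh DB connection per request inside handle() rather than sharing one connection across threads.

```python
import socket
import socketserver
import threading

class RequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Real handler would: open a per-request MySQL connection,
        # then decrypt, parse, and inject. Echoing stands in for
        # building the real reply packet (branch + update flag).
        data = self.request.recv(1024)
        self.request.sendall(data)

class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True       # don't block shutdown on in-flight requests
    allow_reuse_address = True

server = ThreadedServer(("127.0.0.1", 0), RequestHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Round-trip one request; each connection gets its own thread.
with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"ping")
    reply = conn.recv(1024)

server.shutdown()
server.server_close()
```

Because each request runs on its own thread, a slow injection no longer blocks the accept loop.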