Lalufu opened this issue 2 years ago
Hi! Thanks for the issue and feedback. I wasn't aware that sharing data between processes would increase CPU usage this much. Every day you can learn something new :)
You are correct, we can move the blacklist check to e.g. https://github.com/ttu/ruuvitag-sensor/blob/1939518aeaee51ae06bcb447b17d213e1158499f/ruuvitag_sensor/adapters/bleson.py#L122
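Roughly the shape I have in mind, as a sketch only (not the actual adapter code; the queue-based handoff and the names used here are illustrative): the subprocess forwards every advertisement, and the main-process loop filters against a plain local blacklist, so the per-advertisement lookup in the Manager-backed shared data goes away.

```python
# Illustrative sketch only, not the adapter's real code.
# `queue` is the multiprocessing.Queue fed by the bleson subprocess,
# `blacklist` is a plain list/set owned by the main process.
def get_datas(queue, blacklist):
    while True:
        mac, payload = queue.get()   # advertisement forwarded by the subprocess
        if mac in blacklist:         # local lookup, no cross-process call
            continue
        yield mac, payload
```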
For stopping I can't come up with a solution right now. If I remember correctly, `BleCommunicationBleson.stop(observer)` needs to be called before stopping the process, or bleson wasn't killed/cleaned up correctly. I can't really remember the reason. I can't remember either why the bleson communication is handled in its own process.
Should have commented these decisions in the code...
Would you like to make a PR for the blacklist check, and then another for the stopping if you come up with a better solution?
Describe the bug
This is half a bug report, and half a question on how to deal with it.
As bluez is slowly getting deprecated, I tried to move one of my systems that uses ruuvitag-sensor from the HCI adapter to bleson. This is running on a Raspi 3B+.
While this worked, I noticed that the bleson backend uses significantly more CPU: instead of one process using a few percent of CPU (according to top), I now had two processes in the 15 to 25% range.
This is in an environment where the system sees between 10 and 20 BLE advertisements per second.
After some debugging, the problem seems to be in the way the bleson adapter is set up.
ruuvitag-sensor is using the `multiprocessing` module to spawn a new process that handles the actual communication with the `bleson` module. It also sets up a shared data structure to communicate between the main process and the subprocess. This shared data structure is the problem.

For each message received via BLE, the code checks whether it should still run at all (looking at `shared_data["stop"]`) and what the current blacklist is (`shared_data["blacklist"]`). These look like plain dictionary accesses, but they actually cause cross-process communication via UNIX sockets: each access establishes a socket connection, goes through authentication, fetches the data, and closes the socket.

Just commenting these accesses out (knowingly breaking some functionality) gets the CPU usage down to something comparable with the HCI backend.
While I appreciate trying to deal with blacklisting as close to the source as possible, I'm leaning towards ripping this whole thing out and handling blacklisting entirely back in the core.
For the `shared_data["stop"]` handling I'm sure a better solution can also be found; one possible direction is sketched below. Thoughts?
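For what it's worth, one direction (purely a sketch under my own assumptions, not the adapter's current API) would be to hand the subprocess a `multiprocessing.Event` instead of a Manager-backed flag; `Event.is_set()` is backed by a shared semaphore, so checking it for every advertisement does not open a socket the way a Manager dict access does:

```python
import multiprocessing as mp
import time

def ble_worker(stop_event, queue):
    # Stand-in for the bleson observer process; the real adapter would
    # register a bleson callback instead of this polling loop.
    while not stop_event.is_set():    # cheap shared-semaphore check, no socket IPC
        queue.put(("aa:bb:cc:dd:ee:ff", b"\x99\x04fake-payload"))
        time.sleep(1.0)

if __name__ == "__main__":
    stop_event = mp.Event()
    queue = mp.Queue()
    proc = mp.Process(target=ble_worker, args=(stop_event, queue))
    proc.start()

    print(queue.get())                # consume one advertisement
    stop_event.set()                  # tell the worker to exit
    proc.join()
```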
Environment (please complete the following information):