After the discussion, we settled on the following solution:
(1) Start a new measurement process in the background and remember its pid in the JSON config.
(2) Once the containers are killed, signal the process that it has to terminate as well.
(3) The process either writes the data to a file or sends it to the sebs process via an IPC method (pipe?). A rough sketch of this flow is below.
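
A minimal sketch of the planned flow, not the actual implementation: the process name, the `config.json` path, and the cgroup path placeholder are all assumptions made for illustration. It uses `multiprocessing.Pipe` for the IPC variant and SIGTERM as the "you have to die" signal.

```python
import json
import os
import signal
import time
from multiprocessing import Pipe, Process


def measure(conn, cgroup_path, interval=0.1):
    """Sample memory usage until SIGTERM arrives, then send the samples back."""
    samples = []
    stop = False

    def handle_term(signum, frame):
        nonlocal stop
        stop = True

    signal.signal(signal.SIGTERM, handle_term)
    while not stop:
        try:
            with open(cgroup_path) as f:
                samples.append(int(f.read()))
        except OSError:
            pass  # container not (yet) running
        time.sleep(interval)
    conn.send(samples)
    conn.close()


if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    # cgroup path is a placeholder; the real path depends on the container id
    proc = Process(target=measure,
                   args=(child_conn, "/sys/fs/cgroup/<container-id>/memory.current"))
    proc.start()

    # (1) remember the pid in the JSON config (file name is hypothetical)
    with open("config.json", "w") as f:
        json.dump({"measurement_pid": proc.pid}, f)

    time.sleep(1)  # ... containers run and are eventually killed ...

    # (2) tell the measurement process to die as well
    os.kill(proc.pid, signal.SIGTERM)

    # (3) receive the data over the pipe instead of a file
    samples = parent_conn.recv()
    proc.join()
    print("max memory (bytes):", max(samples) if samples else 0)
```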
Added a simple background thread, started in /sebs/local/local.py, that measures the memory consumption of the current container by reading the container's cgroup memory.current file every 100 ms. For multi-container deployments, the container id to read from is updated each time a new container is deployed. At the moment the result is not written to the output JSON file; there is only a logging.info message that the measurements have been started and, at the end, the maximum memory consumption observed (pass the --verbose flag to see it).
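
Roughly, the thread looks like the sketch below. This is an illustration, not the code in /sebs/local/local.py: the class name, the `set_container()`/`stop()` methods, and the Docker cgroup v2 path format are assumptions (the real path depends on the cgroup driver and layout).

```python
import logging
import threading
import time


class MemoryMeasurement(threading.Thread):
    """Polls the active container's cgroup memory.current file every 100 ms."""

    def __init__(self, interval: float = 0.1):
        super().__init__(daemon=True)
        self._interval = interval
        self._container_id = None
        self._lock = threading.Lock()
        self._stop_event = threading.Event()
        self.max_usage = 0

    def set_container(self, container_id: str):
        # Called each time a new container is deployed.
        with self._lock:
            self._container_id = container_id

    def run(self):
        logging.info("Memory measurements started.")
        while not self._stop_event.is_set():
            with self._lock:
                cid = self._container_id
            if cid is not None:
                # Path format is an assumption for Docker with the systemd cgroup driver.
                path = f"/sys/fs/cgroup/system.slice/docker-{cid}.scope/memory.current"
                try:
                    with open(path) as f:
                        self.max_usage = max(self.max_usage, int(f.read()))
                except OSError:
                    pass  # container already gone or not yet started
            time.sleep(self._interval)
        logging.info("Maximum memory consumption: %d bytes", self.max_usage)

    def stop(self):
        self._stop_event.set()
```

Wiring the maximum into the output JSON file would then just mean reading `max_usage` after `stop()` and `join()` instead of only logging it.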