swistakm opened this issue 12 years ago
You're right, it happened to me too... I'm thinking about the real purpose of this 'is_measuring' field... I'll maybe remove it... Its real purpose was to avoid too many Python processes running at the same time, for example if a sensor never exits its measuring process... I've never seen such a case, it's just a safety hook.
Or maybe add a button somewhere to unlock 'is_measuring' on all servers.
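Instead of a button, this could also be a management command. A minimal sketch, assuming skwissh has a `Server` model carrying the `is_measuring` flag (the command name and import path below are illustrative, not existing skwissh code):

```python
# skwissh/management/commands/unlock_servers.py -- hypothetical location
from django.core.management.base import BaseCommand

from skwissh.models import Server  # assumed model holding the is_measuring flag


class Command(BaseCommand):
    help = "Reset the is_measuring flag on every server stuck in a measuring state."

    def handle(self, *args, **options):
        # Bulk-update all locked servers back to an idle state.
        updated = Server.objects.filter(is_measuring=True).update(is_measuring=False)
        self.stdout.write("Unlocked %d server(s)." % updated)
```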
Maybe use a cache to limit the rate? But it still won't help if a sensor never exits...
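For completeness, a cache-based rate limit could look roughly like this, using Django's low-level cache API (the key name and interval are illustrative). As noted above, it only bounds how often a measurement may start; it does nothing about a measurement that never finishes:

```python
from django.core.cache import cache

MEASURE_INTERVAL = 60  # seconds; illustrative value


def try_start_measure(server_id):
    # cache.add() only stores the key if it is not already present, so it acts
    # as a cheap rate limiter: at most one measurement start per interval.
    return cache.add("measuring-%s" % server_id, True, MEASURE_INTERVAL)
```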
The mechanism here should ideally be a 'queue', pushing 'sensor jobs' into it (see the sketch after the pros/cons below).
Pros: if a job never ends, no other job is launched; there are just more and more queued (but not launched) jobs.
Cons: a job asked to launch at (almost exactly) 12:00 may actually execute at 12:10 or later... so sensor value timestamps could be inexact/mixed up.
The queue should be able to process 1 to n jobs in parallel (that should be a configuration variable). I should have a look at Celery (http://celeryproject.org/) or python-rq (http://python-rq.org/), but I think it would complicate my small project a lot :-)
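To make the idea concrete, here is a minimal, framework-free sketch of such a queue with a configurable number of workers. It uses only the Python standard library; `run_sensor`, `PARALLEL_JOBS` and the job tuples are placeholders, not skwissh code:

```python
import threading
from queue import Queue

PARALLEL_JOBS = 2   # would come from a configuration variable
jobs = Queue()      # pending 'sensor jobs'


def run_sensor(server, sensor):
    """Placeholder for the actual SSH measurement."""
    print("measuring %s on %s" % (sensor, server))


def worker():
    # Each worker pulls jobs forever; a hung job only blocks its own worker,
    # while new jobs simply pile up in the queue without being launched.
    while True:
        server, sensor = jobs.get()
        try:
            run_sensor(server, sensor)
        finally:
            jobs.task_done()


for _ in range(PARALLEL_JOBS):
    threading.Thread(target=worker, daemon=True).start()

# The scheduler tick only enqueues work; it never blocks on a slow sensor.
jobs.put(("web-01", "load"))
jobs.put(("web-02", "disk"))
jobs.join()
```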
I think Celery and python-rq are overkill. It wouldn't complicate the code of skwissh, because Celery makes executing tasks simpler, but it would really complicate the application stack needed to run skwissh :)
After some thinking, I'd say it would be nice to have Celery or python-rq as optional task backends. It would require some new management commands and code refactoring, because right now skwissh relies on how kronos collects and installs tasks.
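One possible shape for that, as a rough sketch: pick the backend from a Django setting and keep the current synchronous/cron behaviour as the default. The setting name `SKWISSH_TASK_BACKEND` and the `skwissh.tasks.measure` function are made up for illustration:

```python
from django.conf import settings


def enqueue_measure(server_id):
    backend = getattr(settings, "SKWISSH_TASK_BACKEND", "cron")
    if backend == "celery":
        # Assumes 'measure' is decorated as a Celery task in this configuration.
        from skwissh.tasks import measure  # hypothetical module
        measure.delay(server_id)
    elif backend == "rq":
        import django_rq  # optional dependency, imported only when selected
        from skwissh.tasks import measure  # hypothetical module
        django_rq.enqueue(measure, server_id)
    else:
        # Default: run synchronously, as the kronos-installed cron job does today.
        from skwissh.tasks import measure
        measure(server_id)
```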
This makes skwissh stop collecting data for the server. Additionally, there is no way to fix it except using the Django shell.
I suppose that when a cron job is killed or exits unexpectedly, nothing can change the `is_measuring` field back to `False`. Maybe using a timestamp here would be better - it would ensure that even if the cron job is killed, the server will (at some point in time) return to its previous state.
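A minimal sketch of the timestamp idea, assuming a hypothetical `measuring_started_at` field replacing the boolean (field name, model and timeout are illustrative): a lock older than some threshold is simply treated as expired, so a killed cron job can never wedge the server permanently.

```python
from datetime import timedelta

from django.db import models
from django.utils import timezone

LOCK_TIMEOUT = timedelta(minutes=10)  # illustrative threshold


class Server(models.Model):
    hostname = models.CharField(max_length=255)
    # Replaces the is_measuring boolean: NULL means "not measuring".
    measuring_started_at = models.DateTimeField(null=True, blank=True)

    def is_measuring(self):
        # The lock auto-expires, so the server always returns to an idle state.
        if self.measuring_started_at is None:
            return False
        return timezone.now() - self.measuring_started_at < LOCK_TIMEOUT
```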