Closed CaptainMack closed 5 years ago
@CaptainMack I'm using self.current_timestamp = int(time.time()), and time.time()
is timezone independent because it returns Unix epoch time.
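To illustrate the point above, here is a minimal sketch (assuming a Unix system, since time.tzset() is Unix-only) showing that the value returned by time.time() does not change when the local timezone changes:

```python
import os
import time

# time.time() returns seconds since the Unix epoch; changing the local
# timezone setting does not change the value it returns.
os.environ["TZ"] = "UTC"
time.tzset()
t1 = time.time()

os.environ["TZ"] = "America/Los_Angeles"
time.tzset()
t2 = time.time()

# The two readings differ only by the instant elapsed between the calls.
print(abs(t2 - t1) < 1.0)
```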
According to CachetHQ API documentation, the API /metrics/.../points
marks the timestamp as optional. I could definitely remove the parameter and let CachetHQ determine the timestamp automatically; the only drawback is that the measurement won't be as accurate, since it would be off by the delay between this script taking the measurement and CachetHQ accepting the data.
I think a good compromise here would be to make this configurable. How does it sound?
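One way the compromise above could look (a hypothetical sketch; build_point_payload and the include_timestamp flag are invented for illustration, not existing cachet-url-monitor options):

```python
import time

def build_point_payload(value, include_timestamp=True):
    """Build the body for a POST to /metrics/<id>/points.

    When include_timestamp is False, the "timestamp" key is omitted and
    CachetHQ assigns the point's timestamp server-side on receipt.
    """
    payload = {"value": value}
    if include_timestamp:
        payload["timestamp"] = int(time.time())
    return payload
```

Making it configurable would let users trade a small accuracy loss for timezone-proof server-side timestamps.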
Hello @mtakaki,
Thanks for replying so quickly. I solved the problem at hand by mounting /etc/timezone when running the Docker image. I agree that it would be nice to make it configurable, so that it follows the optional nature of the API.
However, my graph does not seem to update the latency, it just keeps defaulting to 0 (which is my standard setting for the metric). Do you, by any chance, know what could be causing this issue?
All the best,
Not a problem. I'll look into making it configurable soon.
About your issue with the latency, are you seeing these log messages? https://github.com/mtakaki/cachet-url-monitor/blob/master/cachet_url_monitor/configuration.py#L267 If so, are they above 0?
Yes,
I am getting messages such as Metric uploaded: 0.233957 seconds
with latency configured with latency_unit: ms
Our statuspage for reference: https://status.montem.io/
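The latency_unit conversion mentioned above can be sketched roughly like this (an illustration only, not the project's actual implementation; the scale table is assumed):

```python
# Hypothetical mapping from latency_unit to the multiplier applied to a
# latency measured in seconds.
LATENCY_UNIT_SCALE = {"s": 1, "ms": 1000}

def scale_latency(seconds, latency_unit="s"):
    # Convert a latency reading in seconds to the configured unit.
    return seconds * LATENCY_UNIT_SCALE[latency_unit]

# Converts the 0.233957 s reading above into milliseconds.
print(scale_latency(0.233957, "ms"))
```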
Interesting... I haven't seen this happening before. Lemme give it a try here locally.
I just tried it here and it seems to be working fine. Which version of cachet are you using? I'm getting these in the logs:
INFO [2019-02-17 19:31:54,586] cachet_url_monitor.configuration.Configuration - Metric uploaded: 0.150658 seconds
INFO [2019-02-17 19:32:24,789] cachet_url_monitor.configuration.Configuration - Metric uploaded: 0.130219 seconds
INFO [2019-02-17 19:32:54,999] cachet_url_monitor.configuration.Configuration - Metric uploaded: 0.117776 seconds
INFO [2019-02-17 19:33:25,169] cachet_url_monitor.configuration.Configuration - Metric uploaded: 0.110160 seconds
INFO [2019-02-17 19:33:55,350] cachet_url_monitor.configuration.Configuration - Metric uploaded: 0.110824 seconds
And this is what the graph looks like:
Thank you for testing this @mtakaki,
I am running 2.4.0 because we use PHP 7.2 on our servers, and that is the only Cachet version that runs on the newer PHP version. Could this be the source of the problem? I can move the issue to Cachet to hear them out; meanwhile I'll do some manual testing with the Cachet API to see if it registers at all.
I believe that could be the issue. I tested this with 2.3.15 and it worked fine. I did see these errors when I tried with the latest version:
Postgres:
postgres_1 | ERROR: relation "metric_points" does not exist at character 90
postgres_1 | STATEMENT: select avg(metric_points.value * metric_points.counter) as value FROM chq_metrics m JOIN metric_points ON metric_points.metric_id = m.id WHERE m.id = $1 AND to_char(metric_points.created_at, 'YYYYMMDDHH24MI') = $2 GROUP BY to_char(metric_points.created_at, 'HHMI')
Cachet:
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "NOTICE: PHP message: [2019-02-18 03:23:08] development.ERROR: PDOException: SQLSTATE[42P01]: Undefined table: 7 ERROR: relation "metric_points" does not exist"
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "LINE 1: ..._points.counter) as value FROM chq_metrics m JOIN metric_poi..."
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: " ^ in /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:335"
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "Stack trace:"
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "#0 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(335): PDOStatement->execute(Array)"
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "#1 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(706): Illuminate\Database\Connection->Illuminate\Database\{closure}(Object(Illuminate\Database\PostgresConnection), 'select avg(metr...', Array)"
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "#2 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(669): Illuminate\Database\Connection->runQueryCallback('select avg(metr...', Array, Object(Closure))"
cachet_1 | [18-Feb-2019 03:23:08] WARNING: [pool www] child 138 said into stderr: "#3 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(342): Illuminate\Da..."
cachet_1 | 2019/02/18 03:23:08 [error] 123#123: *37 FastCGI sent in stderr: "PHP message: [2019-02-18 03:23:08] development.ERROR: PDOException: SQLSTATE[42P01]: Undefined table: 7 ERROR: relation "metric_points" does not exist
cachet_1 | LINE 1: ..._points.counter) as value FROM chq_metrics m JOIN metric_poi...
cachet_1 | ^ in /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:335
cachet_1 | Stack trace:
cachet_1 | #0 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(335): PDOStatement->execute(Array)
cachet_1 | #1 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(706): Illuminate\Database\Connection->Illuminate\Database\{closure}(Object(Illuminate\Database\PostgresConnection), 'select avg(metr...', Array)
cachet_1 | #2 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(669): Illuminate\Database\Connection->runQueryCallback('select avg(metr...', Array, Object(Closure))
cachet_1 | #3 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(342): Illuminate\Database\C" while reading response header from upstream, client: 172.21.0.1, server: localhost, request: "GET /metrics/1?filter=last_hour HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost", referrer: "http://localhost/"
Do you agree we can close this ticket?
Yes @mtakaki, I am sorry about the delay in replying. The error is on Cachet's API end. Thank you for helping debug this!
All the best, Christian
No worries! Thanks for confirming it. Feel free to reopen it if you think this is on my end :)
Hi!
I've set up the monitor so it measures latency, and everything works like a charm. However, I've tried setting the timezone on my Ubuntu server with
sudo dpkg-reconfigure tzdata
but Cachet still reports the latency an hour earlier than it should.
I am currently using the Docker image, when running the monitor.
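For context, a minimal sketch (assuming a Unix system with a tz database, since time.tzset() is Unix-only) of why the same epoch timestamp can render an hour apart under different timezone settings, which is what mounting /etc/timezone into the container works around:

```python
import os
import time

# A fixed Unix epoch timestamp: 2019-02-12 19:33:20 UTC.
epoch = 1550000000

os.environ["TZ"] = "UTC"
time.tzset()
utc_hour = time.localtime(epoch).tm_hour

# Assumption for illustration: the server runs in Copenhagen time (CET,
# UTC+1 in February).
os.environ["TZ"] = "Europe/Copenhagen"
time.tzset()
copenhagen_hour = time.localtime(epoch).tm_hour

# Same epoch value, different wall-clock rendering: one hour apart.
print(utc_hour, copenhagen_hour)
```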