This PR adds a metrics endpoint for the app on port 9153 using promster.
Why? When working with Prometheus, it's sometimes easier to have a small app to scrape whose data is consistent.
The change adds a very basic Prometheus server to the application, exposing metrics on port 9153 that can be requested with `curl -s localhost:9153`.
It may be out of scope for the original intent of this repo, but I've found these changes useful and can imagine others might as well.
Example output:
$ curl -s localhost:80/test
{
"path": "/test",
"headers": {
"host": "localhost",
"user-agent": "curl/7.64.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "localhost",
"ip": "::ffff:127.0.0.1",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "wileyj"
},
"connection": {}
}
$ curl -s localhost:9153
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 0.12645199999999998 1630078381013
# HELP process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE process_cpu_system_seconds_total counter
process_cpu_system_seconds_total 0.017154 1630078381013
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.14360599999999998 1630078381013
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1630078330
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 35831808 1630078381013
# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.000600628 1630078381014
# HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE nodejs_active_handles gauge
nodejs_active_handles{type="WriteStream"} 2 1630078381013
nodejs_active_handles{type="ReadStream"} 1 1630078381013
nodejs_active_handles{type="Server"} 3 1630078381013
# HELP nodejs_active_handles_total Total number of active handles.
# TYPE nodejs_active_handles_total gauge
nodejs_active_handles_total 6 1630078381013
# HELP nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name.
# TYPE nodejs_active_requests gauge
# HELP nodejs_active_requests_total Total number of active requests.
# TYPE nodejs_active_requests_total gauge
nodejs_active_requests_total 0 1630078381013
# HELP nodejs_heap_size_total_bytes Process heap size from node.js in bytes.
# TYPE nodejs_heap_size_total_bytes gauge
nodejs_heap_size_total_bytes 10665984 1630078381013
# HELP nodejs_heap_size_used_bytes Process heap size used from node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 9565760 1630078381013
# HELP nodejs_external_memory_bytes Nodejs external memory size in bytes.
# TYPE nodejs_external_memory_bytes gauge
nodejs_external_memory_bytes 945022 1630078381013
# HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes.
# TYPE nodejs_heap_space_size_total_bytes gauge
nodejs_heap_space_size_total_bytes{space="read_only"} 163840 1630078381013
nodejs_heap_space_size_total_bytes{space="new"} 1048576 1630078381013
nodejs_heap_space_size_total_bytes{space="old"} 7942144 1630078381013
nodejs_heap_space_size_total_bytes{space="code"} 364544 1630078381013
nodejs_heap_space_size_total_bytes{space="map"} 794624 1630078381013
nodejs_heap_space_size_total_bytes{space="large_object"} 270336 1630078381013
nodejs_heap_space_size_total_bytes{space="code_large_object"} 81920 1630078381013
nodejs_heap_space_size_total_bytes{space="new_large_object"} 0 1630078381013
# HELP nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes.
# TYPE nodejs_heap_space_size_used_bytes gauge
nodejs_heap_space_size_used_bytes{space="read_only"} 156992 1630078381013
nodejs_heap_space_size_used_bytes{space="new"} 715952 1630078381013
nodejs_heap_space_size_used_bytes{space="old"} 7567216 1630078381013
nodejs_heap_space_size_used_bytes{space="code"} 327072 1630078381013
nodejs_heap_space_size_used_bytes{space="map"} 534824 1630078381013
nodejs_heap_space_size_used_bytes{space="large_object"} 262160 1630078381013
nodejs_heap_space_size_used_bytes{space="code_large_object"} 3840 1630078381013
nodejs_heap_space_size_used_bytes{space="new_large_object"} 0 1630078381013
# HELP nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes.
# TYPE nodejs_heap_space_size_available_bytes gauge
nodejs_heap_space_size_available_bytes{space="read_only"} 0 1630078381013
nodejs_heap_space_size_available_bytes{space="new"} 315120 1630078381013
nodejs_heap_space_size_available_bytes{space="old"} 227216 1630078381013
nodejs_heap_space_size_available_bytes{space="code"} 4704 1630078381013
nodejs_heap_space_size_available_bytes{space="map"} 242296 1630078381013
nodejs_heap_space_size_available_bytes{space="large_object"} 0 1630078381013
nodejs_heap_space_size_available_bytes{space="code_large_object"} 0 1630078381013
nodejs_heap_space_size_available_bytes{space="new_large_object"} 1031072 1630078381013
# HELP nodejs_version_info Node.js version info.
# TYPE nodejs_version_info gauge
nodejs_version_info{version="v15.14.0",major="15",minor="14",patch="0"} 1
# HELP up 1 = up, 0 = not up
# TYPE up gauge
up 0
# HELP nodejs_gc_runs_total Count of total garbage collections.
# TYPE nodejs_gc_runs_total counter
nodejs_gc_runs_total{gc_type="scavenge"} 1
nodejs_gc_runs_total{gc_type="incremental_marking"} 4
nodejs_gc_runs_total{gc_type="mark_sweep_compact"} 2
# HELP nodejs_gc_pause_seconds_total Time spent in GC Pause in seconds.
# TYPE nodejs_gc_pause_seconds_total counter
nodejs_gc_pause_seconds_total{gc_type="scavenge"} 0.002521724
nodejs_gc_pause_seconds_total{gc_type="incremental_marking"} 0.0009807589999999999
nodejs_gc_pause_seconds_total{gc_type="mark_sweep_compact"} 0.014777158
# HELP nodejs_gc_reclaimed_bytes_total Total number of bytes reclaimed by GC.
# TYPE nodejs_gc_reclaimed_bytes_total counter
nodejs_gc_reclaimed_bytes_total{gc_type="scavenge"} 2334184
nodejs_gc_reclaimed_bytes_total{gc_type="mark_sweep_compact"} 2117608
# HELP http_request_duration_seconds The HTTP request latencies in seconds.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="0.1",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="0.3",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="0.5",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="0.8",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="1",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="1.5",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="2",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="3",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="10",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="+Inf",method="get",status_code="200",path="/"} 2
http_request_duration_seconds_sum{method="get",status_code="200",path="/"} 0.016163167
http_request_duration_seconds_count{method="get",status_code="200",path="/"} 2
http_request_duration_seconds_bucket{le="0.05",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="0.1",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="0.3",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="0.5",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="0.8",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="1",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="1.5",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="2",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="3",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="10",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_bucket{le="+Inf",method="get",status_code="200",path="/test"} 1
http_request_duration_seconds_sum{method="get",status_code="200",path="/test"} 0.003469653
http_request_duration_seconds_count{method="get",status_code="200",path="/test"} 1
# HELP http_requests_total The total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",status_code="200",path="/"} 2
http_requests_total{method="get",status_code="200",path="/test"} 1
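Once the endpoint is up, Prometheus can scrape it with a `scrape_configs` entry like this sketch (the job name and interval are arbitrary choices, not from this repo):

```yaml
# prometheus.yml fragment (illustrative)
scrape_configs:
  - job_name: http-echo-app      # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9153']
```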
Apologies for the late reply. As you guessed, this is out of scope for the intent of this tool, and it adds another layer of maintenance overhead for me.