= NGINX-to-Prometheus log file exporter
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
:toc:
:toc-placement!:
:toc-title:
image:https://img.shields.io/github/workflow/status/martin-helmich/prometheus-nginxlog-exporter/Compile%20&%20Test[GitHub Workflow Status] image:https://quay.io/repository/martinhelmich/prometheus-nginxlog-exporter/status[link="https://quay.io/repository/martinhelmich/prometheus-nginxlog-exporter",Docker Repository on Quay] image:https://goreportcard.com/badge/github.com/martin-helmich/prometheus-nginxlog-exporter[link="https://goreportcard.com/report/github.com/martin-helmich/prometheus-nginxlog-exporter", Go Report Card] image:https://img.shields.io/github/license/martin-helmich/prometheus-nginxlog-exporter[GitHub] image:https://img.shields.io/badge/donate-PayPal-yellow[link="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=SEARYHPVS9U5N&source=url", Donate]
Helper tool that continuously reads an NGINX log file (or any kind of similar log file) and exports metrics to https://prometheus.io/[Prometheus].
[discrete]
== Contents
toc::[]
== Usage
You can either use a simple command-line-only configuration or create a configuration file for more advanced settings.
Use the command-line:
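A minimal invocation might look like the following (the flag names shown here are illustrative; run the binary with `--help` to see the exact flags supported by your version):

[source,shell]
----
$ ./prometheus-nginxlog-exporter \
    -format='$remote_addr - $remote_user [$time_local] "$request" $status' \
    -listen-port=4040 \
    /var/log/nginx/access.log
----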
Use the configuration file:
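For example (assuming a `-config-file` flag; check `--help` for the flag name in your version):

[source,shell]
----
$ ./prometheus-nginxlog-exporter -config-file /path/to/config.hcl
----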
You can verify your config file before deployment, which will exit with shell status indicating success:
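A sketch of such a check (the `-verify-config` flag name is an assumption; consult `--help` for your version):

[source,shell]
----
$ ./prometheus-nginxlog-exporter -verify-config -config-file /path/to/config.hcl
$ echo $?
----

An exit status of `0` indicates that the configuration file is valid, which makes this easy to wire into a CI pipeline or deployment script.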
== Installation
There are multiple ways to install this exporter.
=== Docker
Docker images for this exporter are available from the quay.io and ghcr.io registries:

----
quay.io/martinhelmich/prometheus-nginxlog-exporter:v1
ghcr.io/martin-helmich/prometheus-nginxlog-exporter/exporter:v1
----
Have a look at the https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases[releases page]
to see the available versions and how to pull their images. In general, I would
recommend using the `v1` tag instead of `latest`.
Run the exporter as follows (adjust paths like `/path/to/logs` and `/path/to/config` to your own needs):
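A typical invocation might look like this (the container-side mount point `/mnt/nginxlogs` is an arbitrary choice, and the trailing argument is the log file the exporter should watch):

[source,shell]
----
$ docker run \
    --name nginx-exporter \
    -v /path/to/logs:/mnt/nginxlogs \
    -p 4040:4040 \
    quay.io/martinhelmich/prometheus-nginxlog-exporter:v1 \
    /mnt/nginxlogs/access.log
----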
Command-line flags and arguments can simply be appended to the `docker run` command, for example to use a configuration file:
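A sketch of such an invocation (the `-config-file` flag name and the `/etc/pnle` mount point are assumptions; adjust to your setup):

[source,shell]
----
$ docker run \
    -v /path/to/config:/etc/pnle \
    -p 4040:4040 \
    quay.io/martinhelmich/prometheus-nginxlog-exporter:v1 \
    -config-file /etc/pnle/config.hcl
----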
=== DEB and RPM packages
Each https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases[release] from 1.5.1 or newer provides both DEB and RPM packages.
DEB:

[source,shell]
----
$ wget https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases/download/v1.9.2/prometheus-nginxlog-exporter_1.9.2_linux_amd64.deb
$ apt install ./prometheus-nginxlog-exporter_1.9.2_linux_amd64.deb
----
RPM:

[source,shell]
----
$ wget https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases/download/v1.9.2/prometheus-nginxlog-exporter_1.9.2_linux_amd64.rpm
$ yum localinstall prometheus-nginxlog-exporter_1.9.2_linux_amd64.rpm
----
The packages come with a dependency on systemd and configure the exporter to start automatically:
[source,shell]
----
$ systemctl status prometheus-nginxlog-exporter
$ # systemctl disable prometheus-nginxlog-exporter
$ # systemctl enable prometheus-nginxlog-exporter
----
The packages drop a configuration file at `/etc/prometheus-nginxlog-exporter.hcl`, which you can adjust to your own needs.
If you do not want to use one of the pre-built packages, you can download the
binary itself and manually configure systemd to start it. You can find an
example unit file for this service
https://github.com/martin-helmich/prometheus-nginxlog-exporter/blob/master/res/package/prometheus-nginxlog-exporter.service[in this repository].
Simply copy the unit file to `/etc/systemd/system`:
[source,shell]
----
$ wget -O /etc/systemd/system/prometheus-nginxlog-exporter.service https://raw.githubusercontent.com/martin-helmich/prometheus-nginxlog-exporter/master/res/package/prometheus-nginxlog-exporter.service
$ systemctl enable prometheus-nginxlog-exporter
$ systemctl start prometheus-nginxlog-exporter
----
The shipped unit file expects the binary to be located at `/usr/sbin/prometheus-nginxlog-exporter` (if you sideload the exporter without using your package manager, you might want to put it in `/usr/local` instead) and the configuration file at `/etc/prometheus-nginxlog-exporter.hcl`. Adjust both to your own needs.
=== Kubernetes
If you run a logfile-generating service (be it NGINX, or anything else that generates similar access log files) in Kubernetes, you can run the exporter as a sidecar alongside your "main" container within the same pod.
The following example shows you how to deploy the exporter as a sidecar, accepting logs from the main container via syslog:
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "4040"
spec:
  containers:
    # ...
----
In this example, the configuration file is passed via the `exporter-config` ConfigMap, which might look as follows:
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter-config
data:
  config.hcl: |
    listen {
      port = 4040
    }

    namespace "nginx" {
      source {
        syslog {
          listen_address = "udp://127.0.0.1:5531"
          format = "rfc3164"
        }
      }

      format = "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""

      labels {
        app = "default"
      }
    }
----
The config file instructs the exporter to accept log input via syslog. To forward logs to the exporter, just instruct your main container to send its access logs via syslog to `127.0.0.1:5531` (which works, since the main container and the sidecar share their network namespace).
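With NGINX as the main container, this forwarding can be set up using the `access_log` directive's built-in syslog support (the `combined` format used here is NGINX's default; it should match the `format` string configured in the exporter):

[source,nginx]
----
# inside the http or server block of the main container's nginx.conf
access_log syslog:server=127.0.0.1:5531,tag=nginx combined;
----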
=== Building from source
To build the exporter from source, simply install it with `go get`:
[source,shell]
----
$ go get github.com/martin-helmich/prometheus-nginxlog-exporter
----
Alternatively, clone this repository and run `go build`:
[source,shell]
----
$ git clone https://github.com/martin-helmich/prometheus-nginxlog-exporter.git
$ cd prometheus-nginxlog-exporter
$ go build
----
== Collected metrics
This exporter collects the metrics listed below. The collector can listen on multiple log files at once and publish metrics in different namespaces. Each metric carries the labels `method` (containing the HTTP request method) and `status` (containing the HTTP status code).

The `http_upstream_time_seconds` metric requires your access log format to contain the `$upstream_response_time` variable. Metrics are exported at the `/metrics` path.
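To make the timing and size variables available in your access log, you can define a custom log format in NGINX (the format name `timed_combined` is an arbitrary choice; remember to mirror the same format string in the exporter's `format` setting):

[source,nginx]
----
log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" '
                          '$request_length $request_time $upstream_response_time';

access_log /var/log/nginx/access.log timed_combined;
----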
These metrics are exported:
[options="header"]
|===
| Metric | Description

| `<namespace>_http_response_count_total`
| The total number of processed HTTP requests/responses.

| `<namespace>_http_response_size_bytes`
| The total amount of transferred content in bytes.

| `<namespace>_http_request_size_bytes`
| The total amount of received traffic in bytes. This metric requires the `$request_length` variable in the log format.

| `<namespace>_http_upstream_time_seconds`
| A summary vector of the upstream response times in seconds. Logging these needs to be specifically enabled in NGINX using the `$upstream_response_time` variable in the log format.

| `<namespace>_http_upstream_time_seconds_hist`
| Same as `<namespace>_http_upstream_time_seconds`, but as a histogram vector. Also requires the `$upstream_response_time` variable in the log format.

| `<namespace>_http_response_time_seconds`
| A summary vector of the total response times in seconds. Logging these needs to be specifically enabled in NGINX using the `$request_time` variable in the log format.

| `<namespace>_http_response_time_seconds_hist`
| Same as `<namespace>_http_response_time_seconds`, but as a histogram vector. Also requires the `$request_time` variable in the log format.
|===
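Once scraped, these metrics can be queried like any other Prometheus counters and histograms; for example (assuming a namespace of `app1`):

[source,promql]
----
# Requests per second, averaged over 5 minutes, broken down by status code
sum by (status) (rate(app1_http_response_count_total[5m]))

# 95th percentile of total response time, from the histogram variant
histogram_quantile(0.95, sum by (le) (rate(app1_http_response_time_seconds_hist_bucket[5m])))
----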
Additional labels can be configured in the configuration file (see below). The `<namespace>` prefix can be omitted or overridden; see the advanced configuration options described further below.
== Configuration file
You can specify a configuration file to read at startup. The configuration file is expected to be either in https://github.com/hashicorp/hcl[HCL] or YAML format. Here's an example file:
[source,hcl]
----
listen {
  port = 4040
  address = "10.1.2.3"
  metrics_endpoint = "/metrics"
}

consul {
  enable = true
  address = "localhost:8500"
  datacenter = "dc1"
  scheme = "http"
  token = ""

  service {
    id = "nginx-exporter"
    name = "nginx-exporter"
    address = "192.168.3.1"
    tags = ["foo", "bar"]
  }
}

namespace "app1" {
  format = "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""

  source {
    files = [
      "/var/log/nginx/app1/access.log"
    ]
  }

  print_log = false

  labels {
    app = "application-one"
    environment = "production"
    foo = "bar"
  }

  histogram_buckets = [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]
}
----
The same configuration as a YAML file:
[source,yaml]
----
listen:
  port: 4040
  address: "10.1.2.3"
  metrics_endpoint: "/metrics"

consul:
  enable: true
  address: "localhost:8500"
  datacenter: dc1
  scheme: http
  token: ""
  service:
    id: "nginx-exporter"
    name: "nginx-exporter"
    address: "192.168.3.1"
    tags: ["foo", "bar"]

namespaces:
  - labels:
      app: "application-one"
      environment: "production"
      foo: "bar"
    histogram_buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]
----
For historic reasons, this exporter exports separate metrics for different namespaces (because the namespace is part of the metric name). However, in many (most) cases, it's more convenient to have the same metric name across different namespaces (with different log formats and names).
This can be done in two steps:

. Configure a common metric name prefix using the `metrics_override` option.
. Optionally, preserve the original namespace name as a metric label using the `namespace_label` option.

[source,hcl]
----
namespace "app1" {
  # ...
  metrics_override = { prefix = "myprefix" }
  namespace_label = "vhost"
  # ...
}
----
* `prefix` can be set to `""`, resulting in metrics like `http_response_count_total{...}`.
* `namespace_label` can be omitted, giving you full control over the metric format.

Some details and history on this can be found in https://github.com/martin-helmich/prometheus-nginxlog-exporter/issues/13[issue #13].
With a configuration like the one above, exported metrics will carry `upstream_addr` and `country` labels.
Currently, the exporter supports reading log data from local files and from syslog. All log sources can be configured on a per-namespace basis using the `source` property.
When reading from log files, all that is needed is a `files` property:
[source,hcl]
----
namespace "test" {
  source {
    files = ["/var/log/nginx/access.log"]
    // ...
  }
}
----
The exporter can also open and listen on a Syslog port and read logs from there. Configuration works as follows:
namespace "test" {
  source {
    syslog {
      listen_address = "udp://127.0.0.1:8514" <1>
      format = "rfc3164" <2>
      tags = ["nginx"] <3>
    }
    // ...