bluecmd / fortigate_exporter

Prometheus exporter for Fortigate firewalls
GNU General Public License v3.0
227 stars 69 forks

[Help - Container] Connection refuse from Prometheus target status #276

Open DevDorrejo opened 5 months ago

DevDorrejo commented 5 months ago

Hello, I have set up a rootless environment with podman (sharing the code in case someone needs it):

#!/usr/bin/env bash
# -*- coding: utf-8 -*-

# Network container creation
if ! podman network ls | grep -q monitor; then
  podman network create monitor
fi

# Volume Management
## This will point the mount to the folder and store the info there
folders=(
  "grafana/data"
  "prometheus/data"
  "prometheus/config"
  "prometheus/fortigate"
)
paths="${PWD}/volume"
for d in "${folders[@]}"; do
  if [ ! -d "${paths}/${d}" ]; then
    mkdir -p "${paths}/${d}"
  fi
  podman volume create \
    -o type=none \
    -o device="${paths}/${d}" \
    -o o=bind \
    "${d%%/*}-${d#*/}"
done
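As a pure-bash aside (no podman needed): the `"${d%%/*}-${d#*/}"` expansion in the loop above flattens each `dir/subdir` entry into the volume name it creates, which is why the mounts later reference names like `grafana-data` and `prometheus-fortigate`:

```shell
# Show the volume names the loop derives from each "dir/subdir" entry.
folders=(
  "grafana/data"
  "prometheus/data"
  "prometheus/config"
  "prometheus/fortigate"
)
names=()
for d in "${folders[@]}"; do
  # "${d%%/*}" keeps the part before the first "/", "${d#*/}" the part after.
  names+=("${d%%/*}-${d#*/}")   # e.g. "grafana/data" -> "grafana-data"
done
printf '%s\n' "${names[@]}"
```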

# Credentials
PASS=$(openssl rand -base64 12) && echo "$PASS"
# mkdir -p "${PWD}"/.enc && echo "${PASS}" >.enc/pass && podman secret create "$(echo "${PASS}" | openssl enc -e -a -base64 | sed 's/[^a-zA-Z0-9]*$//')" .enc/pass
mkdir -p "${PWD}"/.enc && echo "${PASS}" >.enc/pass && podman secret create pass-secret .enc/pass

# Pod Creation
podman pod create \
  --replace \
  --restart unless-stopped \
  -h "Hostname" \
  -n "podName" \
  --network "monitor" \
  --infra-name "infraName" \
  -p 127.0.0.1:3000:3000 \
  -p 127.0.0.1:9090:9090

# Grafana
podman run -d \
  --replace \
  --pull=always \
  --label "io.containers.autoupdate=registry" \
  --user "$(id -u)" \
  --name grafana \
  --pod "PodName" \
  --secret pass-secret \
  -v grafana-data:/var/lib/grafana:U \
  -e "GF_SECURITY_ADMIN_PASSWORD__FILE=/run/secrets/pass-secret" \
  -e "GF_DEFAULT_INSTANCE_NAME=EYES" \
  -e "GF_SERVER_ENABLE_GZIP=true" \
  -e "GF_FEATURE_TOGGLES_ENABLE=publicDashboards" \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel, citilogics-geoloop-panel, gowee-traceroutemap-panel, alexanderzobnin-zabbix-app" \
  -e "GF_LOG_MODE=console file" \
  docker.io/grafana/grafana-enterprise

# Prometheus setup

## Prometheus
podman run -d \
  --replace \
  --pull=always \
  --label "io.containers.autoupdate=image" \
  --name prometheus \
  --pod "PodName" \
  -v prometheus-data:/prometheus:U \
  -v prometheus-config:/etc/prometheus:U \
  docker.io/prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/prometheus \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.console.templates=/etc/prometheus/consoles \
  --web.enable-lifecycle

## prometheus.yml
# Prometheus file configuration

cat >>"${paths}"/"${folders[2]}"/prometheus.yml <<EOF
# Prometheus.YML version 1
global:
  scrape_interval: 15s
  evaluation_interval: 30s
  scrape_timeout: 10s

scrape_configs:
  # The job name is added as a label \`job=<job_name>\` to any timeseries scraped from this config.
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9090"]

  # Example
  - job_name: "Example"
    scrape_interval: 5s
    static_configs:
      - targets: ["node_exporter:9100"]
        labels:
          group: "Example"

# Alertmanager configuration
#alerting:
#  alertmanagers:
#    - static_configs:
#        - targets:
#            - alertmanager:9093

EOF

## Node_export (For openSUSE host)
zypper in -y podman systemd-container
firewall-cmd --permanent --add-port=9100/tcp && firewall-cmd --reload

set -- nodeProm
useradd -c "$1 Manager User" -md /opt/"$1" -U "$1"
loginctl enable-linger "$1"
machinectl shell "$1"@

cp -R /usr/share/containers "${HOME}/.config/"
sed -i '0,/"journald"/s,,"k8s-file",' "${HOME}"/.config/containers/containers.conf
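The `sed` above leans on two GNU sed features: an empty pattern in `s,,...,` reuses the most recent regex (here `"journald"`), and the `0,/re/` address limits the substitution to the first matching line. A self-contained demo on a temp file:

```shell
# Demonstrate GNU sed's empty-pattern reuse plus the 0,/re/ first-match range.
tmpf=$(mktemp)
printf 'log_driver = "journald"\nevents_logger = "journald"\n' > "$tmpf"
# Only the first line containing "journald" gets rewritten to "k8s-file".
sed -i '0,/"journald"/s,,"k8s-file",' "$tmpf"
cat "$tmpf"
```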

podman run -d \
  --replace \
  --pull=newer \
  --label "io.containers.autoupdate=registry" \
  --name node_exporter \
  --cap-add=SYS_TIME \
  --pid="host" \
  --net="host" \
  -v "/:/rootfs:ro,rslave" \
  -v "/proc:/host/proc:ro,rslave" \
  -v "/sys:/host/sys:ro,rslave" \
  docker.io/prom/node-exporter \
  --path.procfs=/host/proc \
  --path.rootfs=/rootfs \
  --path.sysfs=/host/sys \
  '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($|/)'

# Fortigate Exporter
podman run -d \
  --replace \
  --pull=newer \
  --label "io.containers.autoupdate=registry" \
  --name foti_exporter \
  --pid="host" \
  --net="monitor" \
  -v prometheus-fortigate:/config \
  -p 9710:9710 \
  quay.io/bluecmd/fortigate_exporter -insecure -auth-file="/config/fortigate-key.yaml"

Here is the Fortigate Exporter:

podman run -d \
  --replace \
  --pull=newer \
  --label "io.containers.autoupdate=registry" \
  --name foti_exporter \
  --pid="host" \
  --net="monitor" \
  -v prometheus-fortigate:/config \
  -p 9710:9710 \
  quay.io/bluecmd/fortigate_exporter -insecure -auth-file="/config/fortigate-key.yaml"

fortigate-key.yaml:

 "https://10.0.0.2":
     token: TokenAPI
     probes:
         exclude:
             - Wifi                                                                                                                                                                                                                               
             - Firewall/LoadBalance

Prometheus:

# Prometheus.YML version 1
global:
  scrape_interval: 15s
  evaluation_interval: 30s
  scrape_timeout: 10s

scrape_configs:
  - job_name: "FortigateExporter"
    metrics_path: /probe
    scrape_interval: 5s
    static_configs:
      - targets:
          - "https://10.0.0.2"
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
        regex: '(?:.+)(?::\/\/)([^:]*).*'
      - target_label: __address__
        replacement: '127.0.0.1:9710'
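(For what it's worth, the `instance` relabel regex above just strips the scheme and any port, turning `https://10.0.0.2` into `10.0.0.2`. A rough `sed -E` equivalent, for illustration only since Prometheus uses RE2:)

```shell
# Approximate the relabel regex '(?:.+)(?::\/\/)([^:]*).*' with sed -E:
# drop everything up to "://", keep the host part.
instance=$(printf '%s\n' "https://10.0.0.2" | sed -E 's#.+://([^:/]*).*#\1#')
echo "$instance"
```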

But when I go to "prometheus:9090/targets?search=FortigateExporter", it shows "connection refused" for the Fortigate target.

But on the server where the fortigate_exporter container runs, I can connect:

curl -I -X GET "http://127.0.0.1:9710/probe?target=https://10.0.0.2"

returns: HTTP/1.1 200 OK

Now, this causes:

  1. The metrics don't appear in Grafana's PromQL.
  2. Prometheus gets "connection refused" from 10.0.0.2, even though the server can reach it.

So, what could I be missing that makes the implementation fail?

DevDorrejo commented 5 months ago

Further tests:

http://127.0.0.1:9710/probe?target=https://10.0.0.2: Error: API connectivity test failed, Response code was 401, expected 200 (path: "api/v2/monitor/system/status")

jseifeddine commented 4 months ago

Further tests:

http://127.0.0.1:9710/probe?target=https://10.0.0.2: Error: API connectivity test failed, Response code was 401, expected 200 (path: "api/v2/monitor/system/status")

Prometheus doesn't talk to your Fortigate device directly, and neither does Grafana...

fortigate_exporter talks to the API on the Fortigate and exposes the metrics to be scraped by Prometheus on its own HTTP server, at port 9710 by default.

Regarding your errors

401 = Unauthorized

Either the token is bad, or it doesn't have the right permissions set on your Fortigate device (10.0.0.2?).
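If you do need to (re)issue a token, a REST API admin can be created on FortiOS along the lines of the sketch below (hypothetical names; the `prometheus_ro` read-only profile is assumed to exist, and exact syntax varies by FortiOS version, so check your device's docs):

```
config system api-user
    edit "prom_exporter"
        set accprofile "prometheus_ro"
        set vdom "root"
    next
end
execute api-user generate-key prom_exporter
```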

So follow the bouncing ball... start by making sure you can connect to the API from wherever fortigate_exporter is running:

curl -X GET -I -k "https://<fortigate-device-address>/api/v2/monitor/system/status/?access_token=<auth-token>"

Replace <fortigate-device-address> and <auth-token>
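That check can also be scripted so the status code drives the verdict. A hedged sketch: `FORTIGATE` and `TOKEN` are placeholder environment variables, and the curl only runs when both are set:

```shell
# Map the HTTP status code from the Fortigate API check to a verdict.
verdict() {
  case "$1" in
    200) echo "token OK" ;;
    401) echo "bad token or insufficient permissions" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

# Placeholders: export FORTIGATE and TOKEN before running the real check.
if [ -n "${FORTIGATE:-}" ] && [ -n "${TOKEN:-}" ]; then
  code=$(curl -sk -o /dev/null -w '%{http_code}' \
    "https://${FORTIGATE}/api/v2/monitor/system/status/?access_token=${TOKEN}")
  verdict "$code"
fi
```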

Bad token example:

HTTP/1.1 401 Unauthorized
Date: Fri, 09 Feb 2024 15:35:25 GMT
X-Frame-Options: SAMEORIGIN
Content-Security-Policy: frame-ancestors 'self'
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=15552000
Content-Length: 503
Content-Type: text/html; charset=iso-8859-1

Good token example:

HTTP/1.1 200 OK
Date: Fri, 09 Feb 2024 15:32:45 GMT
X-Frame-Options: SAMEORIGIN
Content-Security-Policy: frame-ancestors 'self'
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=15552000
Cache-Control: no-cache, must-revalidate
ETag: omitted
Content-Length: 25112
Content-Type: application/json

Once you've fixed that issue, move on to the next - you've got a few :)

You are running containers as per your post above? If that's the case, please understand that you can't use 127.0.0.1 from within one container to talk to the host-mapped port of another container... you must use its container hostname/IP.

So at the bottom of your prometheus.yml, fix the replacement to use the name of the container:

        replacement: 'foti_exporter:9710'
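Putting that fix into the job from the post above, the relabel section would read (only the replacement line changes):

```yaml
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
        regex: '(?:.+)(?::\/\/)([^:]*).*'
      - target_label: __address__
        replacement: 'foti_exporter:9710'
```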

Moving on to the next problem:

Below you specify the volume mount: -v prometheus-fortigate:/config

But where is your fortigate-key.yaml file? Make sure it's in the root of the prometheus-fortigate volume...

podman run -d \
  --replace \
  --pull=newer \
  --label "io.containers.autoupdate=registry" \
  --name foti_exporter \
  --pid="host" \
  --net="monitor" \
  -v prometheus-fortigate:/config \
  -p 9710:9710 \
  quay.io/bluecmd/fortigate_exporter -insecure -auth-file="/config/fortigate-key.yaml"

Anyway, I've lost enthusiasm here, as you've made a custom script that needs to be debugged and properly tested... hopefully the clues/hints above help you.

Also, I don't know podman; I'm assuming most of the switches/parameters match Docker's. If that's the case, one more thing you need to look at:

-v prometheus-fortigate:/config

Because you're not prepending prometheus-fortigate with ./, it means you're trying to mount a named volume called prometheus-fortigate.

That volume needs to be created, which I saw you handling in your script... but usually (on Docker) that volume's root directory on the host is created automatically and stored under the Docker service's data directory.

If you wanted a simple bind mount of the directory without that extra bit of detail, just do:

-v ./prometheus-fortigate:/config

That way the folder ./prometheus-fortigate (in your current pwd) is mounted to /config in the container, so put your fortigate-key.yaml in the ./prometheus-fortigate folder.

Anyway, thats it from me

Good luck

jseifeddine commented 4 months ago

Actually, it seems like you may have the key file mounted correctly, because without it you'd get:

probe: no API authentication registered for "https://10.0.0.2"

So skip those checks.

Just do the curl check against the Fortigate firewall from the fortigate_exporter host machine, making sure you get a 200 response first.

Then fix the issue with your prometheus.yml pointing to 127.0.0.1 - it should point to the container IP/hostname.