peimanja / artifactory_exporter

JFrog Artifactory Prometheus Exporter written in Go
Apache License 2.0

`/api/v1/metrics` #57

Closed: pcj closed this issue 2 years ago

pcj commented 2 years ago

If metrics are enabled in the Artifactory system.yaml, the instance exposes additional metrics for Prometheus at this endpoint. One could scrape those metrics directly, but the endpoint is still behind basic auth, so I am thinking this exporter might be able to reverse proxy it. WDYT? Thinking about creating a PR.
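
For reference, enabling the endpoint in system.yaml looks roughly like this; treat it as a sketch and check the JFrog docs for your Artifactory version, since the exact keys have changed between releases:

artifactory:
  metrics:
    # exposes the Open Metrics endpoint at /artifactory/api/v1/metrics
    enabled: true

The reverse-proxy idea itself is small in Go terms. A minimal standalone sketch follows; the upstream URL, credentials, and listen port are placeholders, and the exporter would reuse its existing scrape URI and credential configuration rather than hard-coding anything:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// placeholder upstream; the exporter would take this from its configuration
	target, err := url.Parse("http://localhost:8081/artifactory")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	// /api/v1/metrics on the proxy maps to /artifactory/api/v1/metrics upstream
	http.HandleFunc("/api/v1/metrics", func(w http.ResponseWriter, r *http.Request) {
		// inject the basic auth the upstream endpoint requires,
		// so the scraper never has to carry the credentials
		r.SetBasicAuth("admin", "password")
		proxy.ServeHTTP(w, r)
	})

	// arbitrary listen port for the sketch
	log.Fatal(http.ListenAndServe(":9999", nil))
}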

pcj commented 2 years ago

These are the metrics:

curl -u 'admin:password' http://localhost:8081/artifactory/api/v1/metrics
Handling connection for 8081
# HELP jfrt_http_connections_available_total Available Connections
# UPDATED jfrt_http_connections_available_total 1634692390406
# TYPE jfrt_http_connections_available_total counter
jfrt_http_connections_available_total{max="50",pool="pypi-remote"} 1 1634692390407
# HELP jfrt_http_connections_leased_total Leased Connections
# UPDATED jfrt_http_connections_leased_total 1634692390406
# TYPE jfrt_http_connections_leased_total counter
jfrt_http_connections_leased_total{max="50",pool="pypi-remote"} 0 1634692390407
# HELP jfrt_http_connections_pending_total Pending Connections
# UPDATED jfrt_http_connections_pending_total 1634692390406
# TYPE jfrt_http_connections_pending_total counter
jfrt_http_connections_pending_total{max="50",pool="pypi-remote"} 0 1634692390407
# HELP jfrt_http_connections_max_total Max Connections
# UPDATED jfrt_http_connections_max_total 1634692390406
# TYPE jfrt_http_connections_max_total counter
jfrt_http_connections_max_total{max="50",pool="pypi-remote"} 50 1634692390407
# HELP jfrt_runtime_heap_freememory_bytes Free Memory
# UPDATED jfrt_runtime_heap_freememory_bytes 1634692390406
# TYPE jfrt_runtime_heap_freememory_bytes gauge
jfrt_runtime_heap_freememory_bytes 3.143830e+09 1634692390407
# HELP jfrt_runtime_heap_maxmemory_bytes Max Memory
# UPDATED jfrt_runtime_heap_maxmemory_bytes 1634692390406
# TYPE jfrt_runtime_heap_maxmemory_bytes gauge
jfrt_runtime_heap_maxmemory_bytes 8.589935e+09 1634692390407
# HELP jfrt_runtime_heap_totalmemory_bytes Total Memory
# UPDATED jfrt_runtime_heap_totalmemory_bytes 1634692390406
# TYPE jfrt_runtime_heap_totalmemory_bytes gauge
jfrt_runtime_heap_totalmemory_bytes 6.442451e+09 1634692390407
# HELP jfrt_runtime_heap_processors_total Available Processors
# UPDATED jfrt_runtime_heap_processors_total 1634692390406
# TYPE jfrt_runtime_heap_processors_total counter
jfrt_runtime_heap_processors_total 8 1634692390407
# HELP jfrt_db_connections_active_total Total Active Connections
# UPDATED jfrt_db_connections_active_total 1634692387076
# TYPE jfrt_db_connections_active_total gauge
jfrt_db_connections_active_total 5 1634692390407
# HELP jfrt_db_connections_idle_total Total Idle Connections
# UPDATED jfrt_db_connections_idle_total 1634692387076
# TYPE jfrt_db_connections_idle_total gauge
jfrt_db_connections_idle_total 25 1634692390407
# HELP jfrt_db_connections_max_active_total Total Max Active Connections
# UPDATED jfrt_db_connections_max_active_total 1634692387076
# TYPE jfrt_db_connections_max_active_total gauge
jfrt_db_connections_max_active_total 80 1634692390407
# HELP jfrt_db_connections_min_idle_total Total Min Idle Connections
# UPDATED jfrt_db_connections_min_idle_total 1634692387076
# TYPE jfrt_db_connections_min_idle_total gauge
jfrt_db_connections_min_idle_total 1 1634692390407
# HELP sys_cpu_totaltime_seconds Total cpu time from the threads in seconds
# UPDATED sys_cpu_totaltime_seconds 1634692390406
# TYPE sys_cpu_totaltime_seconds gauge
sys_cpu_totaltime_seconds 753.207 1634692390407
# HELP jfrt_storage_current_total_size_bytes Used Storage
# UPDATED jfrt_storage_current_total_size_bytes 1634692390406
# TYPE jfrt_storage_current_total_size_bytes gauge
jfrt_storage_current_total_size_bytes 0.000000e+00 1634692390407
# HELP app_disk_total_bytes Used bytes for app home directory disk device
# UPDATED app_disk_total_bytes 1634692390405
# TYPE app_disk_total_bytes gauge
app_disk_total_bytes 2.103045e+11 1634692390407
# HELP app_disk_free_bytes Free bytes for app home directory disk device
# UPDATED app_disk_free_bytes 1634692390405
# TYPE app_disk_free_bytes gauge
app_disk_free_bytes 1.596263e+11 1634692390407
# HELP jfrt_artifacts_gc_next_run_seconds Next GC Run
# UPDATED jfrt_artifacts_gc_next_run_seconds 1634690826953
# TYPE jfrt_artifacts_gc_next_run_seconds gauge
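
If you can put credentials in the Prometheus configuration itself, scraping this endpoint directly is also an option, since scrape configs support basic_auth. A rough sketch, with placeholder host and credentials:

scrape_configs:
  - job_name: 'artifactory-openmetrics'
    metrics_path: /artifactory/api/v1/metrics
    basic_auth:
      username: admin
      password: password
    static_configs:
      - targets: ['your-artifactory-endpoint:8081']
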
peimanja commented 2 years ago

@pcj I like the idea; just making it optional and defaulting it to false should be good. I would be glad to look at the PR 👍

pcj commented 2 years ago

So far we are OK with the metrics we have, so I'm not planning a PR just yet. I suppose we'd want to add this if we run into some instability.

pcj commented 2 years ago

I ended up just creating an nginx proxy for the basic auth. Here's what that looks like:

events {
  worker_connections  4096;
}
http {
    server { 
        listen 80;
        location /metrics {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://your-artifactory-endpoint:80/artifactory/api/v1/metrics;
            proxy_set_header Authorization "Basic {echo -n 'admin:password' | base64}";
        }
    }
}
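
The {echo -n 'admin:password' | base64} in the Authorization header above is a placeholder rather than nginx syntax; generate the real value and paste it in, then you can smoke-test the proxy locally:

# produce the value that goes after "Basic " in the Authorization header
echo -n 'admin:password' | base64
# with nginx running against this config, verify the proxy end to end
curl -s http://localhost/metrics | head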

Load the config into a ConfigMap for the deployment below to mount:

kubectl -n artifactory create configmap artifactory-prometheus-metrics-proxy-nginx-conf --from-file=nginx.conf

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: artifactory-api-metrics-proxy
  name: artifactory-api-metrics-proxy
  namespace: artifactory
spec:
  replicas: 1
  selector:
    matchLabels:
      app: artifactory-api-metrics-proxy
  template:
    metadata:
      labels:
        app: artifactory-api-metrics-proxy
      annotations:
        prometheus.io/path: "/metrics"
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        command: ["nginx", "-g", "daemon off;"]
        args: ["-c", "/etc/nginx/custom/nginx.conf"]
        ports:
          - containerPort: 80
            name: http          
        volumeMounts:
        - mountPath: /etc/nginx/custom
          name: nginx-config
          readOnly: true
      volumes:
      - name: nginx-config
        configMap:
          name: artifactory-prometheus-metrics-proxy-nginx-conf
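
To sanity-check the proxy before Prometheus picks it up through the annotations, a quick port-forward against the deployment works:

kubectl -n artifactory port-forward deploy/artifactory-api-metrics-proxy 8080:80
# in another shell
curl -s http://localhost:8080/metrics | head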