Open · madelaney opened this issue 4 years ago
Hello @madelaney, I have not found much documentation on this, but here is my super-simple configuration, made today, that works for me :)
Shared Hiera YAML (common.yaml for me):
prometheus::node_exporter::export_scrape_job: true
prometheus::apache_exporter::export_scrape_job: true
prometheus::collect_scrape_jobs:
  - job_name: node
  - job_name: apache
Node definition for the Prometheus server (I want to clean this up and move it to Hiera later):
class { 'prometheus::server':
  alerts => {
    'groups' => [
      {
        'name'  => 'alert.rules',
        'rules' => [
          {
            'alert'       => 'InstanceDown',
            'expr'        => 'up == 0',
            'for'         => '5m',
            'labels'      => {
              'severity' => 'page',
            },
            'annotations' => {
              'summary'     => 'Instance {{ $labels.instance }} down',
              'description' => '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.',
            },
          },
        ],
      },
    ],
  },
  scrape_configs => [
    {
      'job_name'        => 'prometheus',
      'scrape_interval' => '10s',
      'scrape_timeout'  => '10s',
      'static_configs'  => [
        {
          'targets' => [ 'localhost:9090' ],
          'labels'  => {
            'alias' => 'Prometheus',
          },
        },
      ],
    },
  ],
}
Then I just include prometheus::node_exporter and prometheus::apache_exporter where I need them.
Just a basic example, hope it helps!
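For what it is worth, the wiring behind export_scrape_job / collect_scrape_jobs is Puppet exported resources, so this only works when storeconfigs/PuppetDB is enabled. A rough sketch of what happens under the hood; the resource title, port and parameters here are illustrative guesses rather than lines copied from the module source:

# Sketch only: with export_scrape_job => true, each exporter class
# exports a prometheus::scrape_job resource describing itself.
@@prometheus::scrape_job { "node_${facts['networking']['fqdn']}":
  job_name => 'node',
  targets  => ["${facts['networking']['fqdn']}:9100"],
  labels   => {},
}

# The server side then collects every exported job whose job_name is
# listed in collect_scrape_jobs and turns the targets into file_sd files.
Prometheus::Scrape_job <<| job_name == 'node' |>>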
I am currently trying to wrap my head around this too.
I can provide a bit of an update here. In my global Hiera config (common.yaml) I have:
lookup_options:
  prometheus::blackbox_exporter::modules:
    merge:
      strategy: deep
      merge_hash_arrays: true
  prometheus::node_exporter::collectors_enable:
    merge:
      strategy: deep
      merge_hash_arrays: true
prometheus::storage_retention: 90d
prometheus::daemon::export_scrape_job: true
prometheus::node_exporter::export_scrape_job: true
prometheus::node_exporter::collectors_enable:
  - mountstats
  - ntp
  - processes
  - systemd
prometheus::apache_exporter::export_scrape_job: true
prometheus::blackbox_exporter::export_scrape_job: true
prometheus::elasticsearch_exporter::export_scrape_job: true
prometheus::haproxy_exporter::export_scrape_job: true
prometheus::collect_scrape_jobs:
  - job_name: apache
  - job_name: elasticsearch
  - job_name: haproxy
  - job_name: mysql
  - job_name: node
  - job_name: postgres
  - job_name: postfix
  - job_name: process
  - job_name: puppetdb
  - job_name: redis
prometheus::mysqld_exporter::export_scrape_job: true
prometheus::mysqld_exporter::cnf_socket: /var/run/mysqld/mysqld.sock
prometheus::mysqld_exporter::cnf_user: root
prometheus::postgres_exporter::export_scrape_job: true
prometheus::postfix_exporter::export_scrape_job: true
prometheus::process_exporter::export_scrape_job: true
prometheus::puppetdb_exporter::export_scrape_job: true
prometheus::redis_exporter::export_scrape_job: true
Hiera data for the Prometheus server node:
classes:
  - grafana
  - prometheus::blackbox_exporter
  - prometheus::node_exporter
grafana::provisioning_datasources:
  apiVersion: 1
  datasources:
    - name: 'Prometheus'
      type: 'prometheus'
      access: 'proxy'
      url: 'http://localhost:9090'
      isDefault: true
grafana::provisioning_dashboards:
  apiVersion: 1
  providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: true
      options:
        path: '/var/lib/grafana/dashboards'
        puppetsource: 'puppet:///modules/arteal/grafana_dash'
prometheus::manage_prometheus_server: true
prometheus::postfix_exporter::manage_user: false
prometheus::postfix_exporter::user: postfix
Then a host with Redis and Apache looks like:
classes:
  - apache
  - prometheus::apache_exporter
  - prometheus::redis_exporter
  - redis
And I use a default node manifest like:
node default {
  hiera_include('classes')
}
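(Side note: on current Puppet releases hiera_include is deprecated in favour of the lookup function; if I read the docs right, the equivalent would be something like the following:)

node default {
  # Collect every 'classes' entry across the hierarchy and include them all.
  lookup('classes', Array[String], 'unique').include
}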
This works perfectly. Let me know if I can help in any other way.
Part of what I am trying to figure out is how to declare, in a webserver's profile, that I want the blackbox_exporter living on the prometheus server node to test it. The idea is to have the blackbox_exporter remotely check the webserver. This all works fine if I put static configs in the Hiera of the prometheus server, but I want to put the configs in the profile of the thing that needs to be checked.
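What I am experimenting with, but have not verified yet, is exporting the scrape job from the webserver's profile and collecting it under a blackbox job on the server. A sketch, where the resource title, URL and labels are my own inventions rather than anything documented by the module:

# Hypothetical webserver profile: export a target for a shared
# 'blackbox' job instead of hard-coding it on the Prometheus server.
@@prometheus::scrape_job { "blackbox_http_${facts['networking']['fqdn']}":
  job_name => 'blackbox',
  targets  => ["https://${facts['networking']['fqdn']}/"],
  labels   => { 'role' => 'webserver' },
}

The server would still need a matching job_name entry in prometheus::collect_scrape_jobs, and presumably the usual blackbox relabelling (metrics_path, params and relabel_configs pointing __address__ at the exporter); that part I have so far only managed with static config on the server.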
Affected Puppet, Ruby, OS and module versions/distributions
8.3.0
How to reproduce (e.g. Puppet code you use)
prometheus server
prometheus client
shared yaml
What are you seeing?
The catalog is applied without any errors, but the Prometheus server isn't collecting any node_exporter scrape configs.
What behaviour did you expect instead?
I expected to see the catalog applied, but also the Prometheus server collecting the node_exporter scrape configs.
Output log
Any additional information you'd like to impart
I think this is more of a usage question than a bug, but I feel the examples do not really show how to exercise this collector behavior.
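For reference, this is my understanding of the smallest pair of manifests that should exercise the collector behavior, using the same parameter names as the Hiera examples above (and assuming exported resources, i.e. PuppetDB/storeconfigs, are enabled):

# On every monitored node: export a scrape job for this exporter.
class { 'prometheus::node_exporter':
  export_scrape_job => true,
}

# On the Prometheus server: collect every exported 'node' job.
class { 'prometheus::server':
  collect_scrape_jobs => [
    { 'job_name' => 'node' },
  ],
}

If the catalog applies cleanly but nothing is collected, it is worth confirming that the exporting agents have actually run and that their catalogs reached PuppetDB, since the collector only sees resources that are already stored there.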