abuttino opened this issue 3 years ago
@abuttino do you have a screenshot or some more info? Not sure what you are entering as default values, so would be helpful to see exactly what you are doing and how to reproduce.
I have the 30-33 pipeline files in conf.d and the grok files in the /etc/logstash/patterns.d folder. I copied the snippets from your Grafana page into the prometheus.yml file, made a new .yml for the blackbox_exporter, and put my own domain in it, but didn't change the aliases (e.g. ping@), even though there are no addresses in my Postfix server for them.
smtp_starttls:
  prober: tcp
  timeout: 20s
  tcp:
    preferred_ip_protocol: ip4
    tls_config:
      insecure_skip_verify: true
    query_response:
      - expect: "^220 ([^ ]+) ESMTP (.+)$"
      - send: "EHLO prober"
      - expect: "^250-(.*)"
      - send: "STARTTLS"
      - expect: "^220"
      - starttls: true
      - send: "EHLO prober"
      - expect: "^250-"
      - send: "QUIT"
smtp_banner:
  prober: tcp
  timeout: 20s
  tcp:
    preferred_ip_protocol: ip4
    query_response:
      - expect: "^220 ([^ ]+) ESMTP (.+)$"
      - send: "EHLO prober"
      - expect: "^250-(.*)"
      - send: "MAIL FROM:ping@mydomain.com"
      - expect: "^250-(.*)"
      - send: "RCPT TO:test.email@mydomain.com"
      - expect: "^250-(.*)"
      - send: "QUIT"
There are very few instructions on how to use this, but here is my current logstash.conf, which I put together from browsing, with no Logstash or Elastic experience:
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
#input {
#  beats {
#    port => 5044
#  }
#}
input {
  file {
    path => "/var/log/mail.log"
    type => "postfix" # You can define a type however you like.
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
It finally started reading the log when I gave Logstash the proper permissions.
I just put that together from what I saw on other forums and what seemed like common sense.
However!
In order to use Elasticsearch for the Grafana dashboard: Grafana won't save the data source because there is no timestamp info (or something like that?).
I saw a couple of your other repositories, and they had instructions on what to input for the Elasticsearch plugin.
So, without that, there is no way I can get any further. A real shame, too! It looks like a great dashboard.
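For anyone hitting the same timestamp problem: it is usually solved in a Logstash filter by grokking the syslog timestamp out of the mail log and feeding it to the date filter, so each event gets a proper @timestamp for Grafana to use. A minimal sketch, assuming a standard syslog-style /var/log/mail.log (the pattern and field names here are my own guesses, not taken from the 30-33 pipeline files):

```
filter {
  if [type] == "postfix" {
    # SYSLOGTIMESTAMP, SYSLOGHOST, etc. are stock grok patterns;
    # "timestamp" is an assumed field name.
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:msg}" }
    }
    # Parse the extracted string into @timestamp so Elasticsearch
    # has a usable time field.
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```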
Ok, I changed a few things around and got Grafana to see data from Elasticsearch by using Beats, but after receiving several emails nothing seems to be tallying. It would almost appear that ES isn't using the 30-33 pipelines. I will try to define them in logstash.yml to get them working.
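One note for anyone following along: pipelines are declared in pipelines.yml rather than logstash.yml. If the 30-33 files sit in conf.d, something like this (paths are assumptions based on a stock Debian/Ubuntu layout) should make Logstash load them all:

```yaml
# /etc/logstash/pipelines.yml
# A single pipeline that globs every file in conf.d, including 30-33.
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
```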
Given this grafana dashboard is so nice, I am willing to put in the extra work to get it working, with zero knowledge of logstash or ES.
Here is what I now have for the logstash.conf in the conf.d folder:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "postfix-%{+YYYY.MM.dd}"
  }
}
@abuttino do you have a screenshot or some more info? Not sure what you are entering as default values, so would be helpful to see exactly what you are doing and how to reproduce.
I believe you asked for a screenshot as well?
blackbox.yml
smtp_starttls:
  prober: tcp
  timeout: 20s
  tcp:
    preferred_ip_protocol: ip4
    tls_config:
      insecure_skip_verify: true
    query_response:
      - expect: "^220 ([^ ]+) ESMTP (.+)$"
      - send: "EHLO prober"
      - expect: "^250-(.*)"
      - send: "STARTTLS"
      - expect: "^220"
      - starttls: true
      - send: "EHLO prober"
      - expect: "^250-"
      - send: "QUIT"
smtp_banner:
  prober: tcp
  timeout: 20s
  tcp:
    preferred_ip_protocol: ip4
    query_response:
      - expect: "^220 ([^ ]+) ESMTP (.+)$"
      - send: "EHLO prober"
      - expect: "^250-(.*)"
      - send: "MAIL FROM:ping@mydomain.com"
      - expect: "^250-(.*)"
      - send: "RCPT TO:test.email@mydomain.com"
      - expect: "^250-(.*)"
      - send: "QUIT"
prometheus.yml
# Sample config for Prometheus.
global:
  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'example'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: node
    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['localhost:9154']
  - job_name: 'smtp_status_tls'
    metrics_path: /probe
    params:
      module: [smtp_starttls]
    static_configs:
      - targets: ['mydomain.com']
    relabel_configs:
      # Ensure port is 587, pass as URL parameter
      - source_labels: [__address__]
        regex: (.*)(:.*)?
        replacement: ${1}:587
        target_label: __param_target
      # Make instance label the target
      - source_labels: [__param_target]
        target_label: instance
      # Actually talk to the blackbox exporter though
      - target_label: __address__
        replacement: 127.0.0.1:9115
  - job_name: 'smtp_status'
    metrics_path: /probe
    params:
      module: [smtp_banner]
    static_configs:
      - targets: ['mydomain.com']
    relabel_configs:
      # Ensure port is 25, pass as URL parameter
      - source_labels: [__address__]
        regex: (.*)(:.*)?
        replacement: ${1}:25
        target_label: __param_target
      # Make instance label the target
      - source_labels: [__param_target]
        target_label: instance
      # Actually talk to the blackbox exporter though
      - target_label: __address__
        replacement: 127.0.0.1:9115
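Once the exporter is up, it can help to hit the blackbox exporter by hand with the same URL the relabel_configs above construct, to rule out Prometheus itself. A small sketch (the module/target values mirror the config above; 127.0.0.1:9115 comes from the replacement rule):

```shell
#!/bin/sh
# Build the probe URL that Prometheus will request after relabeling.
MODULE=smtp_banner
TARGET=mydomain.com:25
URL="http://127.0.0.1:9115/probe?module=${MODULE}&target=${TARGET}"
echo "$URL"
# With blackbox_exporter actually running, uncomment to probe and
# check the success metric in its output:
# curl -s "$URL" | grep probe_success
```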
Any help would be much appreciated.
Hi,
I want to configure and use your dashboard, but like him I have already set up Grafana, Logstash, and Elasticsearch. I can't use the dashboard correctly; the data source finds no data.
Do you have any information on how to use these dashboards correctly?
Thank you so much for your help.
Hi, I want to configure and use your dashboard, but like him I have already set up Grafana with SQL Server. There is no data found by the data source.
What should I put in "Project Search"? Does the SQL database have to be local, on the same server? The data source itself is OK.
Do you have any information on how to use these dashboards correctly?
Thanks for your help.
Has this issue been solved? Though this post is from a long time ago, I haven't found any solution to monitor Postfix with Elasticsearch or Grafana.
After getting all the software installed and the files in the right places, I am stuck setting up the data source for Elasticsearch. What am I supposed to input there to have it save and be usable for this template? The default values are not cutting it.
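In case it helps anyone who lands here: the Elasticsearch data source mainly needs a URL, an index pattern that matches what Logstash writes, and a time field. A provisioning-file sketch (field names follow Grafana's data source provisioning format; the index pattern assumes the postfix-%{+YYYY.MM.dd} output shown earlier and an @timestamp produced by a Logstash date filter, so treat it as a starting point, not a drop-in file):

```yaml
# Hypothetical /etc/grafana/provisioning/datasources/elasticsearch.yml
apiVersion: 1
datasources:
  - name: Elasticsearch-Postfix
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      # Must match the Logstash output index and the event time field.
      index: "postfix-*"
      timeField: "@timestamp"
```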