grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki
GNU Affero General Public License v3.0

Using `docker_sd_config` with Nomad bridge networking #6165

Open · nahsi opened 2 years ago

nahsi commented 2 years ago

I'm running promtail on the host, scraping the Docker socket to get logs from containers running in Nomad.

Some containers run in bridge mode. In this mode Nomad creates a bridge interface and a separate iptables chain and uses them for container networking.

Containers running in Nomad's bridge networking mode are not scraped by promtail. I guess the issue is somewhere in the code responsible for creating the __meta_docker_network_.* labels?

My promtail config:

  server:
    http_listen_port: 9380
    http_listen_address: "{{ private_ip }}"
    grpc_listen_port: 0

  positions:
    filename: "{{ promtail_dir }}/positions.yml"

  clients:
    - url: "https://loki-distributor.service.consul/loki/api/v1/push"
      external_labels:
        dc: "{{ dc }}"
        instance: "{{ inventory_hostname }}"
      basic_auth:
        username: "promtail"
        password: "{{ lookup('hashi_vault', 'secret/data/promtail/loki:password') }}"

  scrape_configs:
    - job_name: "syslog"
      syslog:
        listen_address: "127.0.0.1:1514"
        idle_timeout: "3600s"
        use_incoming_timestamp: true
      relabel_configs:
        - source_labels: ["__syslog_message_hostname"]
          target_label: "instance"
        - source_labels: ["__syslog_message_app_name"]
          target_label: "app"
        - source_labels: ["__syslog_message_facility"]
          target_label: "facility"
      pipeline_stages:
        - match:
            selector: '{app=~"({{ _syslog | join("|")}})"}'
            stages:
              - static_labels:
                  filtered: "true"
                  source: "syslog"
        - match:
            selector: '{filtered!="true"}'
            action: drop
            drop_counter_reason: "syslog-filter"
        - labeldrop:
            - "filtered"

    - job_name: "opnsense"
      syslog:
        listen_address: "{{ private_ip }}:1514"
        idle_timeout: "3600s"
        use_incoming_timestamp: true
        labels:
          source: "opnsense"
      relabel_configs:
        - source_labels: ["__syslog_message_hostname"]
          target_label: "instance"
        - source_labels: ["__syslog_message_app_name"]
          target_label: "app"
        - source_labels: ["__syslog_message_facility"]
          target_label: "facility"

    - job_name: "promtail"
      static_configs:
        - labels:
            __path__: "/var/log/promtail/promtail.log"
            source: "system"
            app: "promtail"
      pipeline_stages:
        - regex:
            expression: '^level=(?P<level>\S+) ts=(?P<time>\S+) .*'
        - timestamp:
            source: time
            format: "RFC3339Nano"

    - job_name: "portage"
      static_configs:
        - labels:
            __path__: "/var/log/emerge.log"
            source: "system"
            app: "portage"
      pipeline_stages:
        - regex:
            expression: '^(?P<time>\d+):.*'
        - timestamp:
            source: "time"
            format: "Unix"

    - job_name: "vault"
      static_configs:
        - labels:
            __path__: "/var/log/vault/vault.log"
            source: "system"
            app: "vault"
      pipeline_stages:
        - multiline:
            firstline: '^\S+ \[\S+\]'
        - regex:
            expression: '^(?P<time>\S+) \[(?P<level>\S+)\] .*'
        - timestamp:
            source: "time"
            format: "RFC3339Nano"

    - job_name: "vault-agent"
      static_configs:
        - labels:
            __path__: "/var/log/vault-agent/vault-agent.log"
            source: "system"
            app: "vault-agent"
      pipeline_stages:
        - multiline:
            firstline: '^\S+ \[\S+\]'
        - regex:
            expression: '^(?P<time>\S+) \[(?P<level>\S+)\] .*'
        - timestamp:
            source: "time"
            format: "RFC3339Nano"

    - job_name: "nomad"
      static_configs:
        - labels:
            __path__: "/var/log/nomad/nomad.log"
            source: "system"
            app: "nomad"
      pipeline_stages:
        - multiline:
            firstline: '^\S+ \[\S+\]'
        - regex:
            expression: '^(?P<time>\S+) \[(?P<level>\S+)\] .*'
        - timestamp:
            source: "time"
            format: "RFC3339Nano"

    - job_name: "consul"
      static_configs:
        - labels:
            __path__: "/var/log/consul/*"
            source: "system"
            app: "consul"
      pipeline_stages:
        - multiline:
            firstline: '^\S+ \[\S+\]'
        - regex:
            expression: '^(?P<time>\S+) \[(?P<level>\S+)\] .*'
        - timestamp:
            source: "time"
            format: "RFC3339Nano"

    - job_name: "docker"
      static_configs:
        - labels:
            __path__: "/var/log/docker.log"
            source: "system"
            app: "docker"
      pipeline_stages:
        - timestamp:
            source: "time"
            format: "RFC3339"

    - job_name: "telegraf"
      static_configs:
        - labels:
            __path__: "/var/log/telegraf/telegraf.log"
            source: "system"
            app: "telegraf"
      pipeline_stages:
        - regex:
            expression: "^(?P<time>.+) (?P<level>.)! .*"
        - labels:
            level:
        - timestamp:
            source: "time"
            format: "RFC3339"

    - job_name: "docker-sd"
      docker_sd_configs:
        - host: "unix:///var/run/docker.sock"
          refresh_interval: "5s"
      relabel_configs:
        - source_labels: ['__meta_docker_container_label_com_hashicorp_nomad_alloc_id']
          target_label: "alloc_id"
        - source_labels: ['__meta_docker_container_label_com_hashicorp_nomad_namespace']
          target_label: "namespace"
        - source_labels: ['__meta_docker_container_label_com_hashicorp_nomad_job_name']
          target_label: "job"
        - source_labels: ['__meta_docker_container_label_com_hashicorp_nomad_task_group_name']
          target_label: "group"
        - source_labels: ['__meta_docker_container_label_com_hashicorp_nomad_task_name']
          target_label: "task"
      pipeline_stages:
        - static_labels:
            source: "nomad"

        - match: # traefik
            selector: '{task="traefik"}'
            pipeline_name: "traefik"
            stages:
            - regex:
                expression: '^time="(?P<time>.*)" level=(?P<level>.*) .*'
            - timestamp:
                source: time
                format: "RFC3339"
            - static_labels:
                filtered: "true"

        - match: # grafana
            selector: '{job="grafana",group="grafana",task="grafana"}'
            pipeline_name: "grafana"
            stages:
            - regex:
                expression: '^t=(?P<time>\S+) lvl=(?P<level>\S+).*$'
            - timestamp:
                source: time
                format: "2006-01-02T15:04:05-0700"
            - static_labels:
                filtered: "true"

        - match: # loki
            selector: '{job="loki",task!~"connect-.*"}'
            pipeline_name: "loki"
            stages:
            - regex:
                expression: '^.* ts=(?P<time>\S+).*$'
            - timestamp:
                source: time
                format: "RFC3339Nano"
            - static_labels:
                filtered: "true"

        - match: # promtail
            selector: '{task="promtail"}'
            pipeline_name: "promtail"
            stages:
            - regex:
                expression: '^level=(?P<level>\S+) ts=(?P<time>\S+).*$'
            - timestamp:
                source: time
                format: "RFC3339Nano"
            - static_labels:
                filtered: "true"

        - match: # victoria-metrics
            selector: '{job="victoria-metrics",task=~"victoria-metrics|vmagent"}'
            pipeline_name: "victoria-metrics"
            stages:
            - regex:
                expression: '^(?P<time>\S+)\s+(?P<level>\S+)\s+(?P<function>\S+)\s+.*'
            - timestamp:
                source: time
                format: "RFC3339Nano"
            - static_labels:
                filtered: "true"

        - match: # postgres
            selector: '{job="postgres",task="patroni"}'
            pipeline_name: "postgres"
            stages:
            - multiline:
                firstline: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
            - regex:
                expression: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\,\d{3}).*'
            - timestamp:
                source: time
                format: "2006-01-02 15:04:05,999"
            - drop:
                expression: ".*INFO: no action.*"
                drop_counter_reason: "noise-filter"
            - static_labels:
                filtered: "true"

        - match: # postgres-exporter
            selector: '{task="postgres-exporter"}'
            pipeline_name: "postgres-exporter"
            stages:
            - regex:
                expression: '^ts=(?P<time>\S+) .*'
            - timestamp:
                source: time
                format: "RFC3339Nano"
            - static_labels:
                filtered: "true"

        - match: # mariadb
            selector: '{job="mariadb",task="mariadb"}'
            pipeline_name: "mariadb"
            stages:
            - multiline:
                firstline: '^\d{4}-\d{2}-\d{2}  \d{2}:\d{2}:\d{2}'
            - regex:
                expression: '^(?P<time>\d{4}-\d{2}-\d{2}  \d{2}:\d{2}:\d{2}).*'
            - timestamp:
                source: time
                format: "2006-01-02  15:04:05"
            - drop:
                source: "__meta_docker_container_log_stream"
                value: "stdout"
                drop_counter_reason: "noise-filter"
            - static_labels:
                filtered: "true"

        - match: # maxscale
            selector: '{job="mariadb",task="maxscale"}'
            pipeline_name: "maxscale"
            stages:
            - multiline:
                firstline: '^\d{4}-\d{2}-\d{2}  \d{2}:\d{2}:\d{2}'
            - regex:
                expression: '^(?P<time>\d{4}-\d{2}-\d{2}  \d{2}:\d{2}:\d{2}).*'
            - timestamp:
                source: time
                format: "2006-01-02 15:04:05"
            - drop:
                source: "__meta_docker_container_log_stream"
                value: "stderr"
                drop_counter_reason: "noise-filter"
            - static_labels:
                filtered: "true"

        - match: # nats
            selector: '{job="nats",task="nats"}'
            pipeline_name: "nats"
            stages:
            - regex:
                expression: '.*(?P<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{6}) .*'
            - timestamp:
                source: time
                format: "2006/01/02 15:04:05.999999"
            - static_labels:
                filtered: "true"

        - match: # minio
            selector: '{job="minio",task="minio"}'
            pipeline_name: "minio"
            stages:
            - multiline:
                firstline: '^\S+'
            - static_labels:
                filtered: "true"

        - match: # seaweedfs
            selector: '{job="seaweedfs"}'
            pipeline_name: "seaweedfs"
            stages:
            - static_labels:
                filtered: "true"

        - match: # seaweedfs-csi
            selector: '{job="seaweedfs-csi"}'
            pipeline_name: "seaweedfs-csi"
            stages:
            - static_labels:
                filtered: "true"

        - match: # redis
            selector: '{task="redis"}'
            pipeline_name: "redis"
            stages:
            - regex:
                expression: '^.* (?P<time>\d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2}\.\d{3}).*'
            - timestamp:
                source: time
                format: "02 Jan 2006 15:04:05.999"
            - static_labels:
                filtered: "true"

        - match: # resec
            selector: '{task="resec"}'
            pipeline_name: "resec"
            stages:
            - regex:
                expression: '^time="(?P<time>.*)" level=(?P<level>.*) .*'
            - timestamp:
                source: time
                format: "RFC3339"
            - static_labels:
                filtered: "true"

        - match: # wildduck
            selector: '{job="mail",task="wildduck"}'
            pipeline_name: "mail"
            stages:
            - static_labels:
                filtered: "true"

        - match: # zone-mta
            selector: '{job="mail",task="zone-mta"}'
            pipeline_name: "mail"
            stages:
            - static_labels:
                filtered: "true"

        - match: # haraka
            selector: '{job="mail",task="haraka"}'
            pipeline_name: "mail"
            stages:
            - drop:
                source: "__meta_docker_container_log_stream"
                value: "stderr"
                drop_counter_reason: "noise-filter"
            - multiline:
                firstline: '^\S+'
            - regex:
                expression: '^(?P<time>\S+) .*'
            - timestamp:
                source: time
                format: "RFC3339Nano"
            - static_labels:
                filtered: "true"

        - match: # sftpgo
            selector: '{job="sftpgo"}'
            pipeline_name: "sftpgo"
            stages:
            - json:
                expressions:
                  time:
                  user_agent:
            - drop:
                source: "user_agent"
                value: "Consul Health Check"
                drop_counter_reason: "noise-filter"
            - timestamp:
                source: time
                format: "2006-01-02T15:04:05.999"
            - static_labels:
                filtered: "true"

        - match: # filestash
            selector: '{job="filestash"}'
            pipeline_name: "filestash"
            stages:
            - regex:
                expression: '^(?P<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) .*'
            - timestamp:
                source: time
                format: "2006/01/02 15:04:05"
            - static_labels:
                filtered: "true"

        - match: # transmission
            selector: '{job="transmission",task="transmission"}'
            pipeline_name: "transmission"
            stages:
            - drop:
                source: "__meta_docker_container_log_stream"
                value: "stdout"
                drop_counter_reason: "noise-filter"
            - static_labels:
                filtered: "true"

        - match: # home-assistant
            selector: '{job="home-assistant"}'
            pipeline_name: "home-assistant"
            stages:
            - drop:
                source: "__meta_docker_container_log_stream"
                value: "stdout"
                drop_counter_reason: "noise-filter"
            - regex:
                expression: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) .*'
            - timestamp:
                source: time
                format: "2006-01-02 15:04:05"
            - static_labels:
                filtered: "true"

        - match: # jellyfin
            selector: '{job="jellyfin"}'
            pipeline_name: "jellyfin"
            stages:
            - drop:
                source: "__meta_docker_container_log_stream"
                value: "stderr"
                drop_counter_reason: "noise-filter"
            - regex:
                expression: '^\[(?P<time>\S+)\] .*'
            - timestamp:
                source: time
                format: "15:04:05"
            - static_labels:
                filtered: "true"

        - match: # audiobookshelf
            selector: '{job="audiobookshelf"}'
            pipeline_name: "audiobookshelf"
            stages:
            - regex:
                expression: '^\[(?P<time>\S+)\].*'
            - timestamp:
                source: time
                format: "RFC3339Nano"
            - static_labels:
                filtered: "true"

        - match: # drop
            selector: '{filtered!="true"}'
            action: drop
            drop_counter_reason: "nomad-filter"

        - labeldrop:
            - "filtered"

jeschkies commented 2 years ago

👋 I'm not sure if the network labels have any influence here since you don't seem to filter on them.

Did you try in a different networking mode?

nahsi commented 2 years ago

@jeschkies the other networking mode is the native Docker one, and it works just fine. I hope to have time to debug this issue a bit more soon, and I'll come back with more details.

Although I feel that Nomad needs its own discovery config.

jeschkies commented 2 years ago

Although I feel that Nomad needs its own discovery config.

I tend to agree. However, as Promtail inherits Prometheus's service discovery, I believe it should be Consul SD.

Could you turn on debug logging for Promtail? Maybe there is some information there. You should see some debug logs from the Docker target group.
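
A minimal way to enable that, assuming the standard server block (a sketch; the -log.level=debug command-line flag should work as well):

server:
  log_level: debug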

pikeas commented 2 years ago

I'm experiencing the same issue.

Promtail config:

scrape_configs:
    - job_name: docker
      docker_sd_configs:
          - host: unix:///var/run/docker.sock

Promtail logs with log_level: debug:

level=debug ts=<time> caller=target.go:203 target=docker/<id> msg="starting process loop" container=<id>

Promtail picks up 10 containers, all of which are Nomad init containers with NetworkMode: none. All my actual services are set to bridge mode, and Promtail doesn't see them. docker ps | wc -l shows 36 running containers.

Here's Promtail's target view:

__address__=":80"
__meta_docker_container_id="<id>"
__meta_docker_container_name="/nomad_init_<id>"
__meta_docker_container_network_mode="none"
__meta_docker_network_id="<id>"
__meta_docker_network_ingress="false"
__meta_docker_network_internal="false"
__meta_docker_network_ip=""
__meta_docker_network_name="none"
__meta_docker_network_scope="local"
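
Side note: the pause/init containers themselves could be excluded with an ordinary relabel rule once discovery works; a sketch keyed off the /nomad_init_ name prefix visible above:

relabel_configs:
  - source_labels: ["__meta_docker_container_name"]
    regex: "/nomad_init_.*"
    action: drop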

Could this be related? https://grafana.com/docs/loki/latest/clients/promtail/configuration/#docker_sd_config

# The port to scrape metrics from, when `role` is nodes, and for discovered
# tasks and services that don't have published ports.
[ port: <int> | default = 80 ]

No containers have published ports (including the nomad_init containers that Promtail does see!); it's all mapped via iptables.
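
That default can be overridden, though it only changes the fallback port written into __address__; it does not make the bridge-mode containers appear, since they are missing from the discovered set entirely. A sketch (the 8080 value is arbitrary):

docker_sd_configs:
  - host: unix:///var/run/docker.sock
    port: 8080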

jeschkies commented 2 years ago

Could this be related?

Aah yes. I vaguely remember that containers must have a port to be discovered. We leverage Prometheus's service discovery, so it makes sense that it only uses containers with ports. Here's the Prometheus code.

ivantopo commented 1 year ago

Hello folks 👋

I just ran into this issue while trying to set up Promtail with Nomad + Consul Connect. Containers using Consul Connect must run in bridge networking mode, and in that case there are no exposed ports or anything like that. The value of c.NetworkSettings.Networks ends up empty, so the containers are not discovered. Here is an extract from NetworkSettings when running docker inspect on one of these containers:

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {}
        }

Are there any known workarounds to this issue? Thanks!
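
One blunt workaround, sidestepping Docker SD entirely, is to tail Nomad's task log files with a static scrape: Nomad writes each task's stdout/stderr under <data_dir>/alloc/<alloc_id>/alloc/logs/. A sketch, assuming data_dir = /opt/nomad/data (adjust to your setup); labels must then be derived from the file path rather than from container metadata:

scrape_configs:
  - job_name: nomad-alloc-logs
    static_configs:
      - labels:
          source: nomad
          # task stdout/stderr files are named <task>.stdout.N / <task>.stderr.N
          __path__: /opt/nomad/data/alloc/*/alloc/logs/*.std*.*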

AAverin commented 1 year ago

@jeschkies with Prometheus already supporting nomad_sd_config, are there plans to bring it to Promtail too? The code should be here: https://github.com/prometheus/prometheus/blob/a5a4eab679ccf2d432ab34b5143ef9e1c6482839/discovery/nomad/nomad.go#L137. Related: https://github.com/grafana/loki/issues/5464
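
For context, the Prometheus-side block being referred to looks roughly like this (Prometheus syntax; Promtail does not currently accept it, which is exactly the gap):

scrape_configs:
  - job_name: nomad
    nomad_sd_configs:
      - server: http://localhost:4646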

gbolo commented 1 year ago

I am also having this exact issue; it took me some time to figure out why most of my Nomad jobs were not being processed by promtail. As the Consul Connect mesh requires bridge networking mode, this is a huge blocker. While I understand the reuse of Prometheus code for scraping, caring about network configuration is obviously not relevant for a logging system that has local access to those logs. It seems like a poor fit to reuse here, since Prometheus requires an endpoint to scrape and promtail does not.

the-maldridge commented 10 months ago

Likewise, I just ran into this after trying to figure out whether I'd messed up my ContainerList filters somehow. It kind of defeats the point of asking Docker for logs if everything must expose a port. I have a ton of batch compute containers that don't have network access at all, but I still care about their logs.

jeschkies commented 9 months ago

It's been a while. Did you try the Grafana Agent? It gets more attention than promtail right now.

the-maldridge commented 9 months ago

While I get that the Grafana team may be focusing on other projects right now, I all too often get "have you tried $other_product" when submitting issues against Grafana-maintained projects. It's enough to really put me off further use of any part of the stack, because it's the same kind of update treadmill I watch JS devs contend with. It would be much more productive to slap a banner on the README.md saying that the team is stepping away from a product and deprecating it, if that's the intention.

jeschkies commented 9 months ago

@the-maldridge I hear you. I'm confused myself with how we handle Promtail requests.

I've gone through this old issue again to refresh my memory.

@jeschkies with Prometheus already supporting nomad_sd_config are there plans to bring it to promtail too?

@AAverin makes a good point. However, it would have to be a community contribution, as I'm afraid the team will not prioritize supporting nomad_sd_config.

hitchfred commented 9 months ago

Struggled with this as well. The workaround I settled on was to drop service discovery and set the default Docker logging driver to journald in the Nomad client's nomad.hcl.

plugin "docker" {
  config {
    extra_labels = ["*"]
    logging {
      type = "journald"
      config {
        labels-regex = "com\\.hashicorp\\.nomad.*"
      }
    }
  }
}

Then the Grafana Agent running on the host only needs to read the journal to get everything into Loki, including logs from the Nomad and Docker daemons, Envoy sidecar proxies, tasks without ports, etc.

// Sample config for Grafana Agent Flow.
//
// For a full configuration reference, see https://grafana.com/docs/agent/latest/flow/
logging {
  level = "warn"
}

loki.relabel "journal" {
  forward_to = []

  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }
  rule {
    source_labels = ["__journal__hostname"]
    target_label  = "host"
  }
  rule {
    source_labels = ["__journal_syslog_identifier"]
    target_label  = "syslog_identifier"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_job_name"]
    target_label  = "nomad_job"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_task_group_name"]
    target_label  = "nomad_group"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_alloc_id"]
    target_label  = "nomad_alloc_id"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_task_name"]
    target_label  = "nomad_task"
  }
}

loki.source.journal "read" {
  forward_to    = [loki.write.endpoint.receiver]
  relabel_rules = loki.relabel.journal.rules
  labels        = {component = "loki.source.journal"}
}

loki.write "endpoint" {
  endpoint {
    url = "https://my.loki.host/loki/api/v1/push"
  }
}

AAverin commented 7 months ago

Grafana Agent became Grafana Alloy, but since Alloy also reuses Prometheus code under the hood for discovery.docker, the same bug is still present there. I just wasted two weeks debugging before landing back at this thread almost by accident.

Another issue with Alloy is that it doesn't have a way to feed Nomad logs to Loki directly, so the only option is discovery.docker + loki.source.docker, and discovery is simply broken for bridge-network containers.

In that case, what @hitchfred did is probably the only viable solution.
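
For anyone staying on Promtail rather than Alloy, the same journald route should also work with Promtail's journal target. A sketch, assuming a Promtail build with journal support and the journald logging driver configured as @hitchfred described:

scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        source: journal
    relabel_configs:
      - source_labels: ["__journal__systemd_unit"]
        target_label: unit
      - source_labels: ["__journal_com_hashicorp_nomad_job_name"]
        target_label: nomad_job
      - source_labels: ["__journal_com_hashicorp_nomad_task_name"]
        target_label: nomad_task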