nahsi opened 2 years ago
:wave: I'm not sure if the network labels have any influence here since you don't seem to filter on them.
Did you try in a different networking mode?
@jeschkies the other networking mode is the native Docker one - it works just fine. I hope I will have time to debug this issue a bit more soon, so I will be back with more details.
Although I feel that Nomad needs its own discovery config.
> Although I feel that Nomad needs its own discovery config.
I tend to agree. However, as Promtail inherits Prometheus' service discovery I believe it should be Consul SD.
Could you turn on the debug logs for promtail? Maybe there is some information there. You should see some of the debug logs from the Docker target_group.
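For example, a minimal sketch of how to raise the log level via the config file (assuming the standard `server` block and default ports):

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0
  log_level: debug  # default is "info"
```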
I'm experiencing the same issue.
Promtail config:
```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
```
Promtail logs with `log_level: debug`:

```
level=debug ts=<time> caller=target.go:203 target=docker/<id> msg="starting process loop" container=<id>
```
Promtail picks up 10 containers, all of which are Nomad init containers with `NetworkMode: none`. All my actual services are set to bridge mode, and Promtail doesn't see them. `docker ps | wc -l` shows 36 running containers.
Here's Promtail's target view:
```
__address__=":80"
__meta_docker_container_id="<id>"
__meta_docker_container_name="/nomad_init_<id>"
__meta_docker_container_network_mode="none"
__meta_docker_network_id="<id>"
__meta_docker_network_ingress="false"
__meta_docker_network_internal="false"
__meta_docker_network_ip=""
__meta_docker_network_name="none"
__meta_docker_network_scope="local"
```
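As an aside, if you just want to keep these `nomad_init_*` targets out of the way while debugging, a drop rule along these lines should work (a sketch; it obviously doesn't fix the missing bridge-mode containers):

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      # Drop the init/pause containers Nomad creates for each allocation.
      - source_labels: ['__meta_docker_container_name']
        regex: '/nomad_init_.*'
        action: drop
```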
Could this be related? https://grafana.com/docs/loki/latest/clients/promtail/configuration/#docker_sd_config
```yaml
# The port to scrape metrics from, when `role` is nodes, and for discovered
# tasks and services that don't have published ports.
[ port: <int> | default = 80 ]
```
No containers have mapped ports (including the `nomad_init` containers that Promtail does see!); it's all mapped via iptables.
Could this be related?
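For reference, that option sits under `docker_sd_configs`; a minimal sketch (the port value here is just a placeholder, and as far as I can tell it only affects the `__address__` label for containers that do get discovered):

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        port: 8080  # placeholder; used when a container has no published ports
```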
Aah yes. I vaguely remember that containers must have a port to be discovered. We leverage the service discovery from Prometheus. It makes sense that they only use containers with ports. Here's the Prometheus code.
Hello folks 👋
I just ran into this issue while trying to set up Promtail with Nomad + Consul Connect. Containers using Consul Connect must be on `bridge` network mode, and in that case there are no exposed ports or anything like that. The value of `c.NetworkSettings.Networks` ends up empty, so containers are not discovered. Here is an extract from `NetworkSettings` when doing `docker inspect` on one of these containers:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
Are there any known workarounds to this issue? Thanks!
@jeschkies with Prometheus already supporting `nomad_sd_config`, are there plans to bring it to `promtail` too? The code should be here: https://github.com/prometheus/prometheus/blob/a5a4eab679ccf2d432ab34b5143ef9e1c6482839/discovery/nomad/nomad.go#L137
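For reference, a rough sketch of what the Prometheus-side block looks like (field names are from memory of the Prometheus docs, so treat them as assumptions; Promtail does not support this block today):

```yaml
scrape_configs:
  - job_name: nomad
    nomad_sd_configs:
      - server: http://localhost:4646   # Nomad API address (assumed default)
    relabel_configs:
      - source_labels: ['__meta_nomad_service']   # label name per the Prometheus Nomad SD
        target_label: 'service'
```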
Related to https://github.com/grafana/loki/issues/5464
I am also having this exact issue, and it took me some time to figure out why most of my Nomad jobs were not being processed by promtail. As Consul Connect mesh requires the use of bridged network mode, this is a huge blocker. While I understand the reuse of Prometheus code for scraping, it is obviously not relevant to care about network configuration when dealing with a logging system that has access to those logs locally. It seems like it is not a perfect fit to reuse in this case, since Prometheus requires an endpoint to scrape and promtail does not.
Likewise I just ran into this after trying to figure out if I'd messed up my ContainerList filters somehow. It kind of defeats the point of asking docker for logs if everything must be exporting a port. I have a ton of batch compute containers that don't even have network access at all, but I still care about their logs.
It's been a while. Did you try the Grafana Agent? It gets more attention than promtail right now.
While I get that the Grafana team may be focusing on other projects right now, I all too often get "have you tried $other_product" when submitting issues against Grafana-maintained projects. It's enough to really put me off further use of any part of the stack, because it's the same kind of update treadmill I watch JS devs contend with. It would be much more productive to slap a banner on the README.md file saying that the team is stepping away from a product and deprecating it, if that's the intention.
@the-maldridge I hear you. I'm confused myself with how we handle Promtail requests.
I've gone through this old issue again to refresh my memory.
> @jeschkies with Prometheus already supporting nomad_sd_config are there plans to bring it to promtail too?
@AAverin makes a good point. However, it must be a community contribution, as I'm afraid the team will not prioritize supporting `nomad_sd_config`.
Struggled with this as well. The workaround I settled on was to drop service discovery and set the default Docker logging plugin to journald in the Nomad client's `nomad.hcl`.
plugin "docker" {
config {
extra_labels = ["*"]
logging {
type = "journald"
config {
labels-regex = "com\\.hashicorp\\.nomad.*"
}
}
}
}
Then, the Grafana Agent running on the host only needs to read the journal to get everything into Loki, including logs from the Nomad and Docker daemons, envoy sidecar proxies, tasks without ports, etc.
```river
// Sample config for Grafana Agent Flow.
//
// For a full configuration reference, see https://grafana.com/docs/agent/latest/flow/

logging {
  level = "warn"
}

loki.relabel "journal" {
  forward_to = []

  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }
  rule {
    source_labels = ["__journal__hostname"]
    target_label  = "host"
  }
  rule {
    source_labels = ["__journal_syslog_identifier"]
    target_label  = "syslog_identifier"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_job_name"]
    target_label  = "nomad_job"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_task_group_name"]
    target_label  = "nomad_group"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_alloc_id"]
    target_label  = "nomad_alloc_id"
  }
  rule {
    source_labels = ["__journal_com_hashicorp_nomad_task_name"]
    target_label  = "nomad_task"
  }
}

loki.source.journal "read" {
  forward_to    = [loki.write.endpoint.receiver]
  relabel_rules = loki.relabel.journal.rules
  labels        = {component = "loki.source.journal"}
}

loki.write "endpoint" {
  endpoint {
    url = "https://my.loki.host/loki/api/v1/push"
  }
}
```
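For anyone who wants to stay on Promtail rather than move to the Agent, a roughly equivalent journal scrape would look something like this (a sketch; it assumes your Promtail build includes journal support and reuses the same `__journal_*` fields as the relabel rules above):

```yaml
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
      - source_labels: ['__journal_com_hashicorp_nomad_job_name']
        target_label: 'nomad_job'
      - source_labels: ['__journal_com_hashicorp_nomad_task_name']
        target_label: 'nomad_task'
```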
Grafana Agent became Grafana Alloy, but since it also reuses the Prometheus code under the hood for `discovery.docker`, the same bug is still present there. I just wasted two weeks of my time debugging, only to land back at this thread almost by mistake.

Another issue with Alloy is that it doesn't have a way to feed Nomad logs to Loki, so the only possible route is `discovery.docker` + `loki.source.docker`, and that discovery is just broken for bridge-network containers. In that case what @hitchfred did is probably the only viable solution.
I'm running promtail on the host, scraping the docker socket to get the logs of containers running in Nomad. Some containers are running in `bridge` mode. In this mode Nomad will create a bridge interface and a separate iptables rules chain and will use it for container networking.

Containers running in Nomad bridge networking mode are not scraped by promtail. I guess the issue is somewhere in the code responsible for the creation of the `__meta_docker_network_.*` labels?

My promtail config: