Closed. timvanderkooi closed this issue 5 years ago.
hey @timvanderkooi, thanks for raising the issue. We're seeing these logs being emitted for pause containers, since their network mode is none when running under K8s (networking is delegated to the underlying network plugin). This was introduced in 6.12.0, and although the logs are harmless, we'll work on this and update you here when a change is made.
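For anyone curious, here is a minimal, hypothetical sketch of the approach described above (not the actual patch; shouldCollectNetworkInfo and the messages are illustrative names only): a container whose network mode is "none" legitimately has no IP, so the lookup can be skipped or demoted to debug instead of logged as an error.

```go
package sketch

import "log"

// Illustrative only -- not the agent's actual code. The idea behind the fix:
// a container whose network mode is "none" (e.g. a K8s pause container, where
// networking is handled by the CNI plugin) legitimately has no IP, so the
// agent should skip the host-IP/network-info lookup for it instead of
// emitting an ERROR on every collection cycle.
func shouldCollectNetworkInfo(networkMode, containerName string) bool {
	if networkMode == "none" {
		log.Printf("DEBUG: container %s has network mode 'none', skipping network info", containerName)
		return false
	}
	return true
}
```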
A fix has been merged for this and will be available in the next release, so I think we can close this issue now.
@xlucas thanks! Appreciate it.
Odd: Datadog itself doesn't report these errors, but when the same server is also instrumented with another log collector, e.g. LogDNA, LogDNA picks up these errors from the datadog-agent:
Dec 10 22:15:01 1c714f9dd34a datadog-agent INFO UTC | PROCESS | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /frontend will be missing network info: %!s(<nil>)
Dec 10 22:15:01 1c714f9dd34a datadog-agent INFO UTC | PROCESS | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /engine will be missing network info: %!s(<nil>)
Dec 10 22:15:01 1c714f9dd34a datadog-agent INFO UTC | PROCESS | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /nginx will be missing network info: %!s(<nil>)
Dec 10 22:15:01 1c714f9dd34a datadog-agent INFO UTC | PROCESS | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /redis will be missing network info: %!s(<nil>)
Dec 10 22:15:08 1c714f9dd34a datadog-agent INFO UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /frontend will be missing network info: %!s(<nil>)
Dec 10 22:15:08 1c714f9dd34a datadog-agent INFO UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /engine will be missing network info: %!s(<nil>)
Dec 10 22:15:08 1c714f9dd34a datadog-agent INFO UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /nginx will be missing network info: %!s(<nil>)
Dec 10 22:15:08 1c714f9dd34a datadog-agent INFO UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /redis will be missing network info: %!s(<nil>)
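As an aside, the %!s(<nil>) tail on those lines is Go's fmt output for a nil value formatted with %s, i.e. the agent is logging a failure while the underlying error value is actually nil. A tiny standalone example (not agent code; the message text is copied from the logs above for flavor) reproduces it:

```go
package main

import "fmt"

func main() {
	var err error // nil: no concrete error value was ever set
	// Formatting a nil interface with %s prints "%!s(<nil>)" -- exactly the
	// tail seen in the "Failed to get host IPs" log lines above.
	fmt.Printf("Failed to get host IPs. Container /frontend will be missing network info: %s\n", err)
}
```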
I have the same error:
2020-01-28 12:45:27 UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /calico-rr will be missing network info: %!s(<nil>)
2020-01-28 12:45:33 UTC | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | kubelet:d884b5186b651429 | (kubelet.py:317) | GET on kubelet s/stats/summary failed: 403 Client Error: Forbidden for url: https://192.168.0.94:10250/stats/summary/?verbose=True
2020-01-28 12:45:42 UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /etcd1 will be missing network info: %!s(<nil>)
2020-01-28 12:45:42 UTC | CORE | ERROR | (pkg/util/docker/containers.go:110 in ListContainers) | Failed to get host IPs. Container /calico-rr will be missing network info: %!s(<nil>)
I'm using k8s and deploying datadog-agent:7.16.1.
I have the same error and did some tests. I noticed that the problem occurs with Datadog 7 (installed using helm chart version 1.39.5). When I switched to Datadog 6.14.0, it was working fine.
I was getting this error when running agent version 6.12.0-jmx but fixed it by upgrading to 7.19.2-jmx.
Describe what happened:
Every minute, the agent will log something like:
2019-06-28 20:57:02 UTC | PROCESS | WARN | (pkg/util/docker/containers.go:223 in parseContainerNetworkAddresses) | Unable to parse IP: for container: /twistlock_defender_18_11_128
This started occurring after the latest update to 6.12. Since we have our agent set to WARN logging, this floods our logs every minute with these messages from all of our containers without IPs.
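For context, here is a rough, self-contained illustration of how an exposed port with an empty IP string turns into that WARN line. The function name is taken from the log line above, but the body is a simplified sketch, not the agent's actual code: net.ParseIP("") returns nil, and the fallback is to warn.

```go
package main

import (
	"log"
	"net"
)

// port mimics the minimal shape of Docker's port listing: for a container
// without an IP, the IP field comes back empty.
type port struct {
	IP          string
	PrivatePort int
}

// parseContainerNetworkAddresses shares its name with the function in the log
// line above, but this body is an illustration only.
func parseContainerNetworkAddresses(ports []port, containerName string) {
	for _, p := range ports {
		if net.ParseIP(p.IP) == nil {
			// An empty IP fails to parse, so every collection cycle (about
			// once a minute) emits this for every IP-less container.
			log.Printf("WARN: Unable to parse IP: %s for container: %s", p.IP, containerName)
			continue
		}
		// ...otherwise the address would be recorded for network metrics...
	}
}

func main() {
	parseContainerNetworkAddresses([]port{{IP: "", PrivatePort: 8080}}, "/twistlock_defender_18_11_128")
}
```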
Describe what you expected:
This logging should be INFO instead of WARN. Containers without IPs are an expected part of our deployment, since some containers do not require an IP.
Steps to reproduce the issue:
Create a container without an IP
Additional environment details (Operating System, Cloud provider, etc):