gfl-chris opened 2 years ago
Wow, this solved my problem. I have legitimately been stuck on this for 4 work days, and after multiple support cases and escalations leading nowhere, plus going back and forth with the sales rep, this is the option that finally worked; it's a break from how the agent used to behave.

For all those who come after: the standard agent configuration via datadog.yaml from the Linux installation does not expose this option as far as I can tell. I had to uninstall and run the agent via Docker to get it to work.
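A minimal sketch of the Docker invocation, assuming the standard containerized Agent image and a placeholder API key (both are assumptions; the exact command wasn't posted here):

```sh
# Containerized Agent with socket-based log collection forced on.
# <YOUR_API_KEY> is a placeholder; the image tag is an assumption.
docker run -d --name datadog-agent \
  -e DD_API_KEY=<YOUR_API_KEY> \
  -e DD_LOGS_ENABLED=true \
  -e DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true \
  -e DD_LOGS_CONFIG_DOCKER_CONTAINER_USE_FILE=false \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  gcr.io/datadoghq/agent:7
```

The Docker socket mount is what lets the Agent discover containers and tail their stdout/stderr over the socket instead of from log files.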
Hi, I was wondering if this is still an issue with Agent [7.40.0](https://github.com/DataDog/datadog-agent/releases/tag/7.40.0) and above? The reason I'm asking is that we did some work to improve the way the Agent collects logs from containers (behind a feature flag called `logs_config.cca_in_ad`), which was enabled by default in Agents 7.40.0 and above. I tried to recreate your issue and wasn't able to do so with version 7.42.1 of the Agent.
My config was just:

```yaml
logs_enabled: true
logs_config:
  container_collect_all: true
```
And I'd get:

```
==========
Logs Agent
==========

    Reliable: Sending compressed logs in HTTPS to agent-http-intake.logs.datadoghq.eu on port 443
    BytesSent: 12132
    EncodedBytesSent: 5172
    LogsProcessed: 19
    LogsSent: 19

  docker
  ------
    - Type: docker
      Service: info-only
      Source: random
      Status: OK
        The log file tailer could not be made, falling back to socket
      Inputs:
        2d3192ca134c50f393065f4e398528d2720f5a1794fdc93d7e27a08efb085d77
      Average Latency (ms): 0
      24h Average Latency (ms): 0
      Peak Latency (ms): 0
      24h Peak Latency (ms): 0
      Bytes Read: 1814
      Lines Combined: 8
      MultiLine matches: 9
```
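For anyone reproducing this, the output above comes from the Agent's status command; with the containerized Agent (assuming a container named `datadog-agent`, which is an assumption on my part) that's:

```sh
# Prints the Agent status, including the Logs Agent section shown above.
docker exec -it datadog-agent agent status
```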
This was running on:

```
Linux ****.compute.internal 4.14.301-224.520.amzn2.x86_64 #1 SMP Fri Dec 9 09:57:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```
```
$ docker version
Client:
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.18.6
 Git commit:        100c701
 Built:             Sat Dec 3 04:13:49 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.6
  Git commit:       a89b842
  Built:            Sat Dec 3 04:14:27 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        5fd4c4d144137e991c4acebb2146ab1483a97925
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
The issue is still reproducible in dd 7.42.1. Steps to reproduce: run a container with the `awslogs` driver:

```yaml
logging:
  driver: awslogs
  options:
    awslogs-region: us-east-1
    awslogs-group: my-datadog-log-group
    tag: "{{.Name}}"
```
@farioas We found a bug in our Docker tailer that was fixed in 7.43.0 of the Agent (#15138). I was wondering if you could try again with that version?
Hi @carlosroman,

We've been running 7.43.0 since Feb 23 with no issues observed. Anyway, we keep `DD_LOGS_CONFIG_DOCKER_CONTAINER_USE_FILE=false`.
Agent Environment

With `DD_LOGS_CONFIG_DOCKER_CONTAINER_USE_FILE=false`: the agent attaches to the Docker socket to tail containers' stdout/stderr and sends container logs to HQ. But with `DD_LOGS_CONFIG_DOCKER_CONTAINER_USE_FILE` not set: the agent won't attach to the Docker socket to tail containers' stdout/stderr and doesn't send container logs to HQ.
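One way to check which logging driver a given container actually uses, and therefore whether a local log file even exists for the Agent to tail; the container name below is a placeholder:

```sh
# Prints the container's logging driver, e.g. json-file or awslogs.
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container-name>
```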
Describe what happened: We predominantly use the awslogs driver to send logs to CloudWatch, and as you can see the agent fails to find the containers' log files (because there are none). I had to set `DD_LOGS_CONFIG_DOCKER_CONTAINER_USE_FILE` to `false` to make the agent finally collect container logs, but now any container that uses the json-file driver won't have its logs collected (I think).

Describe what you expected: Based on my reading of the Docker log collection from file issues, it looks like the agent currently doesn't tail containers from the Docker socket if it cannot access the log file path.
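For reference, with the default json-file driver, Docker writes each container's output to a file under /var/lib/docker/containers, which is the path a file-based tailer reads; the path scheme below is standard Docker behavior, not something stated in this thread:

```sh
# With the json-file driver, container stdout/stderr ends up here:
ls /var/lib/docker/containers/<container-id>/<container-id>-json.log
```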
Steps to reproduce the issue: Do as I did above.
Additional environment details (Operating System, Cloud provider, etc.):