Open honzakral opened 1 year ago
Can you clarify which keys are duplicated? In the example provided, the keys for each object look unique unless I'm missing something obvious. Edit: what I was missing is that `jq` removes duplicates by default.
It is `ecs.version` in my example. You cannot use `jq` or other tools to format the JSON, because they assume the keys are unique and so they remove the duplicates.
Ah, when I looked at this the first time I filtered it through `jq`, which explains why I couldn't see the duplicates.
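For the record, this parser behavior is easy to demonstrate. A small Python sketch (using the duplicated keys from the example in this issue): by default the parser keeps only the last value for a repeated key, which is also what `jq` does, so the duplicate disappears silently unless you inspect the raw key/value pairs.

```python
import json

# A log line with the duplicated key, as seen in this issue.
doc = '{"service.name": "filebeat", "ecs.version": "1.6.0", "ecs.version": "1.6.0"}'

# Default parsing keeps only the last value for a repeated key,
# so the duplicate vanishes silently.
print(json.loads(doc))
# -> {'service.name': 'filebeat', 'ecs.version': '1.6.0'}

# object_pairs_hook receives every key/value pair before the dict is
# built, so repeated keys can actually be detected.
def duplicated_keys(pairs):
    keys = [k for k, _ in pairs]
    return sorted({k for k in keys if keys.count(k) > 1})

print(json.loads(doc, object_pairs_hook=duplicated_keys))
# -> ['ecs.version']
```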
The actual formatted output is:
```json
{
  "log.level": "error",
  "@timestamp": "2023-03-25T16:52:40.879Z",
  "message": ".....",
  "component": {
    "binary": "filebeat",
    "dataset": "elastic_agent.filebeat",
    "id": "filestream-default",
    "type": "filestream"
  },
  "log": {
    "source": "filestream-default"
  },
  "id": "6B22CE6262FCE2A2",
  "log.logger": "input.filestream",
  "log.origin": {
    "file.line": 168,
    "file.name": "input-logfile/harvester.go"
  },
  "service.name": "filebeat",
  "source_file": "filestream::.global::native::398408-2049",
  "ecs.version": "1.6.0",
  "ecs.version": "1.6.0"
}
```
We are picking up the `ecs.version` of the process that emitted the log (filebeat) and then adding it again when the agent ingests the log and writes it to its own log file.
I'm running Elastic 8.9.0 and started seeing the same error on one of my systems.
`Cannot index event publisher.Event`
`"caused_by":{"type":"i_o_exception","reason":"Duplicate field 'ecs.version'`
I have a massive JSON, so can't post it here, but I see:
`"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}"`
@belimawr @honzakral Any idea how I can fix this?
@rodrigc could you elaborate on how it started happening for you? So far I've seen the `ecs.version` duplication, but I have not seen ES refusing to ingest the document. Can you reproduce it consistently, or is it specific to one of your environments/deployments?
@belimawr This problem started happening for me recently (past few days). I'm not sure why it started happening or what caused it. I'm not using beats directly. I'm using Elastic Agent with an Elastic Agent Policy distributed by Fleet. In the Elastic Agent console, I'm seeing:
Duplicate field 'ecs.version'
...
dropping event!
The log I am getting for this is massive, so I can't post here. However, I have opened a ticket at support.elastic.co and referenced this GitHub issue as a similar problem, so hopefully I will be able to get to the bottom of this problem with Elastic's help.
I took the `message` field from the error log I provided in x12.txt and tried to reformat it to make it more readable; see the attached x13.txt for the full error. I think the portion of the error which is causing a problem is this part:
reason":"Duplicate field 'ecs.version at [Source: (org.elasticsearch.common.io.stream.ByteBufferStreamInput); line: 1, column: 3541]"}}, dropping event!", "component":{"binary":"filebeat", "dataset":"elastic_agent.filebeat", "id":"filestream-default", "type":"filestream"}, "log":{"source":"filestream-default"}, "log.logger":"elasticsearch", "log.origin":{"file.line":446, "file.name":"elasticsearch/client.go"}, "service.name":"filebeat", "ecs.version":"1.6.0", "ecs.version":"1.6.0"}", "orchestrator":{"cluster":{"name":"kubernetes", "url":"[redacted]"}}, "orchestrator.cluster":{"name":"mycluster", "url":"[redacted]"}, "stream":"stderr"}, Private:(*input_logfile.updateOp)(0xc004f13110), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {"type":"illegal_argument_exception", "reason":"[1:10932] Duplicate field 'ecs.version at [Source: (org.elasticsearch.common.io.stream.ByteBufferStreamInput); line: 1, column: 10932]",
Specifically this part:
"ecs.version":"1.6.0", "ecs.version":"1.6.0"}",
I'm definitely seeing the `ecs.version` in there twice for some reason, and don't know where that is coming from.
The reason is explained at the bottom of https://github.com/elastic/elastic-agent/issues/2398#issuecomment-1485503934. The agent collects logs from sub-processes; each process emits `ecs.version` in every log line, and then the agent logs that message itself with some additional context, and the agent logger also adds `ecs.version`. We can probably fix this by having the agent add the `ecs.version` field only if it isn't already present.
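The conditional-add idea could look roughly like this. This is a hypothetical Python sketch only (the agent itself is written in Go); the function name, the context shape, and the hard-coded version string are illustrative assumptions, not the agent's actual code.

```python
import json

ECS_VERSION_KEY = "ecs.version"
AGENT_ECS_VERSION = "1.6.0"  # assumed: whatever version the agent's logger uses

def wrap_subprocess_log(raw_line: str, agent_context: dict) -> dict:
    """Hypothetical sketch: merge a sub-process log line with the agent's
    own context, adding ecs.version only when the sub-process did not
    already emit it, so the field is never serialized twice."""
    event = json.loads(raw_line)
    event.update(agent_context)
    # Only add the field when absent.
    event.setdefault(ECS_VERSION_KEY, AGENT_ECS_VERSION)
    return event
```

With this guard, a filebeat line that already carries `ecs.version` keeps its own value, and an event without the field gets the agent's value exactly once.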
This is happening for a known reason; we haven't fixed it because until now it wasn't actually causing any problems. The failed ingestion here is new.
Did you update anything on your system? An integration? The stack? Did you change any configuration in Elasticsearch?
Something has gotten stricter. If there was an integration update: we have started enabling TSDS by default, and that might be the cause here, although I would have expected this to be caught in the automated tests we have for shipping logs to Fleet.
It's hard for me to tell what exactly changed to cause this problem. I'm doing everything through elastic-agent and agent policies in the Fleet UI.
I did not change the agent policy that is connected with this agent.
One thing that I did change, which may or may not have relevance is related to the questions I asked in this thread: https://discuss.elastic.co/t/can-i-omit-es-username-es-password-kibana-fleet-username-kibana-fleet-password-from-elastic-agent-kubernetes-daemonset/342558
As part of cleaning up our use of credentials for this particular agent, which is running as a "Fleet managed" agent in Kubernetes, I removed `KIBANA_FLEET_USERNAME` and `KIBANA_FLEET_PASSWORD` from the Kubernetes daemonset running elastic-agent. The elastic-agent daemonset has valid values for `FLEET_URL` and `FLEET_ENROLLMENT_TOKEN`, so that is the bare minimum that gets the connectivity between elastic-agent, Kibana, and Elasticsearch going.
But that's the only thing I changed recently.
On Elastic side I opened case #01479128 at support.elastic.co, and referred to this GitHub issue.
@rodrigc the changes you mentioned should not influence/cause the issues you're experiencing. As @cmacknz mentioned, having ingest issues is new for us.
Thanks for opening the ticket; support should be able to help you, and if needed it will reach us.
Did this ever get fixed?
I'm currently facing the same issue.
Setup:
- Elastic Agent 8.13.4 running in a container as Fleet Server with the Docker integration, shipping other containers' logs
- Elasticsearch 8.13.4 as a container, producing logs
- Ingest pipeline to deserialize Elasticsearch container logs from JSON to the root of the document
Elastic agent debug output:
[elastic_agent.filebeat][debug] Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2024, time.June, 9, 6, 28, 20, 310365774, time.UTC), Meta:{"input_id":"filestream-docker-021adcf5-730c-4adc-adbe-38ff0ec64b90-docker-e6b585ee0d2972bb3a693df13bf1cb7879de6da4e3146fc42262c7ca45348bfa","raw_index":"logs-docker.container_logs-default","stream_id":"docker-container-logs-fleet-server-e6b585ee0d2972bb3a693df13bf1cb7879de6da4e3146fc42262c7ca45348bfa"}, Fields:{"agent":{"ephemeral_id":"5c88f549-5de5-4b80-b5c7-0084e2777353","id":"14794bc2-740f-4ec6-a233-3ca124727263","name":"fleet-server","type":"filebeat","version":"8.13.4"},"container":{"id":"e6b585ee0d2972bb3a693df13bf1cb7879de6da4e3146fc42262c7ca45348bfa","image":{"name":"docker.elastic.co/beats/elastic-agent:8.13.4"},"labels":{"description":"Agent manages other beats based on configuration provided.","io_k8s_description":"Agent manages other beats based on configuration provided.","io_k8s_display-name":"Elastic-Agent image","license":"Elastic License","maintainer":"infra@elastic.co","name":"elastic-agent","net_unraid_docker_managed":"dockerman","org_label-schema_build-date":"2024-05-07T09:58:42Z","org_label-schema_license":"Elastic License","org_label-schema_name":"elastic-agent","org_label-schema_schema-version":"1.0","org_label-schema_url":"https://www.elastic.co/elastic-agent","org_label-schema_vcs-ref":"a2e31a1c99df431b43ee5058b2e603eeba5e0421","org_label-schema_vcs-url":"github.com/elastic/elastic-agent","org_label-schema_vendor":"Elastic","org_label-schema_version":"8.13.4","org_opencontainers_image_created":"2024-05-07T09:58:42Z","org_opencontainers_image_licenses":"Elastic 
License","org_opencontainers_image_ref_name":"ubuntu","org_opencontainers_image_title":"Elastic-Agent","org_opencontainers_image_vendor":"Elastic","org_opencontainers_image_version":"20.04","release":"1","summary":"elastic-agent","url":"https://www.elastic.co/elastic-agent","vendor":"Elastic","version":"8.13.4"},"name":"fleet-server"},"data_stream":{"dataset":"docker.container_logs","namespace":"default","type":"logs"},"ecs":{"version":"8.0.0"},"elastic_agent":{"id":"14794bc2-740f-4ec6-a233-3ca124727263","snapshot":false,"version":"8.13.4"},"event":{"dataset":"docker.container_logs"},"host":{"architecture":"x86_64","containerized":false,"hostname":"fleet-server","ip":["1.2.3.4"],"mac":["00-11-22-33-44-55"],"name":"fleet-server","os":{"codename":"focal","family":"debian","kernel":"6.1.79-Unraid","name":"Ubuntu","platform":"ubuntu","type":"linux","version":"20.04.6 LTS (Focal Fossa)"}},"input":{"type":"filestream"},"log":{"file":{"device_id":"46","inode":"6947321","path":"/var/lib/docker/containers/e6b585ee0d2972bb3a693df13bf1cb7879de6da4e3146fc42262c7ca45348bfa/e6b585ee0d2972bb3a693df13bf1cb7879de6da4e3146fc42262c7ca45348bfa-json.log"},"offset":7556662},"message":"{\"log.level\":\"warn\",\"@timestamp\":\"2024-06-09T09:28:20.306+0300\",\"message\":\"Cannot index event (status=400): dropping event! 
Enable debug logs to view the event and cause.\",\"component\":{\"binary\":\"filebeat\",\"dataset\":\"elastic_agent.filebeat\",\"id\":\"filestream-default\",\"type\":\"filestream\"},\"log\":{\"source\":\"filestream-default\"},\"log.logger\":\"elasticsearch\",\"log.origin\":{\"file.line\":454,\"file.name\":\"elasticsearch/client.go\",\"function\":\"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).bulkCollectPublishFails\"},\"service.name\":\"filebeat\",\"ecs.version\":\"1.6.0\",\"ecs.version\":\"1.6.0\"}\n","stream":"stderr"}, Private:(*input_logfile.updateOp)(0xc001f351a0), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {"type":"x_content_parse_exception","reason":"[1:590] Duplicate field 'ecs.version'\n at [Source: (String)\"{\"log.level\":\"warn\",\"@timestamp\":\"2024-06-09T09:28:20.306+0300\",\"message\":\"Cannot index event (status=400): dropping event! Enable debug logs to view the event and cause.\",\"component\":{\"binary\":\"filebeat\",\"dataset\":\"elastic_agent.filebeat\",\"id\":\"filestream-default\",\"type\":\"filestream\"},\"log\":{\"source\":\"filestream-default\"},\"log.logger\":\"elasticsearch\",\"log.origin\":{\"file.line\":454,\"file.name\":\"elasticsearch/client.go\",\"function\":\"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client\"[truncated 99 chars]; line: 1, column: 590]","caused_by":{"type":"json_parse_exception","reason":"Duplicate field 'ecs.version'\n at [Source: (String)\"{\"log.level\":\"warn\",\"@timestamp\":\"2024-06-09T09:28:20.306+0300\",\"message\":\"Cannot index event (status=400): dropping event! 
Enable debug logs to view the event and cause.\",\"component\":{\"binary\":\"filebeat\",\"dataset\":\"elastic_agent.filebeat\",\"id\":\"filestream-default\",\"type\":\"filestream\"},\"log\":{\"source\":\"filestream-default\"},\"log.logger\":\"elasticsearch\",\"log.origin\":{\"file.line\":454,\"file.name\":\"elasticsearch/client.go\",\"function\":\"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client\"[truncated 99 chars]; line: 1, column: 590]"}}, dropping event!
Hey @admlko,
Do you have Elastic-Agent monitoring enabled? With monitoring enabled, the Elastic-Agent will collect its own logs without hitting this JSON-parsing issue.
You can then use the conditions in the integration to exclude collecting the Elastic-Agent container logs, thus avoiding this indexing issue.
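If you go the exclusion route, the condition on the Docker integration's container-logs stream might look something like the following. This is an assumption-laden sketch: the variable name comes from the docker provider, and the container name is from this thread's setup; verify both against your own deployment before relying on it.

```yaml
# Hypothetical: set in the Fleet UI on the container-logs stream
# (advanced options -> "Condition"). Skips the agent's own container,
# whose logs are already collected by the monitoring Filebeat.
condition: ${docker.container.name} != 'fleet-server'
```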
Hi @belimawr,
Thanks for replying.
I have Elastic-Agent monitoring enabled and I can see the logs from the Agent.
I don't see excluding Elastic-Agent container logs as a solution because I want to be able to see and act on those logs when something goes wrong :)
> I don't see excluding Elastic-Agent container logs as a solution because I want to be able to see and act on those logs when something goes wrong :)
The monitoring does not read the logs output by the container; the agent logs to a file and uses a Filebeat to ingest that file. Docker itself does not even see those logs.
When the Elastic-Agent is running in a container environment it logs twice: once to stderr, so you can see the logs with `docker logs`, and once to a file. The file is ingested by the monitoring Filebeat.
If you're using the docker integration to ingest logs from all containers you'll end up ingesting the Elastic-Agent logs twice. Hence my suggestion to exclude the Elastic-Agent container from the docker integration.
You will still have the logs collected by the monitoring Filebeat.
Thank you for your clarification, makes sense.
Would it be possible to add this note to the docs (or did I miss it)?
@belimawr does this mean we can close this issue as fixed?
> @belimawr does this mean we can close this issue as fixed?
No, we should fix it. While the JSON specification allows duplicated fields, it is up to the implementation to decide how to handle them. The parser ES uses throws an error and refuses to parse the JSON by default, which causes issues. The index configuration/mapping we use for the monitoring logs circumvents it, so the issue is not super critical; however, it's an annoying one.
It shouldn't be too hard to fix, I can think of a couple different approaches to go about it.
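One such approach, sketched only to illustrate the general idea (not the agent's actual planned fix): round-trip each offending event through a lenient JSON parser before shipping it, which collapses repeated keys to their last value so ES accepts the document.

```python
import json

def strip_duplicate_keys(raw: str) -> str:
    """Python's default JSON parser keeps only the last value for each
    repeated key, so a parse/serialize round-trip yields unique keys."""
    return json.dumps(json.loads(raw))

broken = '{"service.name": "filebeat", "ecs.version": "1.6.0", "ecs.version": "1.6.0"}'
print(strip_duplicate_keys(broken))
# -> {"service.name": "filebeat", "ecs.version": "1.6.0"}
```

The trade-off is that this silently discards the earlier value, which is acceptable here only because both copies of `ecs.version` carry the same value.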
For confirmed bugs, please report:
with a config of:
Elastic agent pod produces logs that contain (for example):
While this is technically allowed by the JSON RFC, RFC 8259 states (https://www.rfc-editor.org/rfc/rfc8259#section-4) that:
> The names within an object SHOULD be unique.