Closed Fr33Radical closed 3 years ago
It sounds like an ingest node processor that expands the dots, similar to elastic/beats#20489, could help here.
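For reference, Elasticsearch ships a `dot_expander` ingest processor that turns dotted field names into nested objects at ingest time. A minimal pipeline sketch (the pipeline name `expand-dots` and the field name are illustrative; newer Elasticsearch versions also accept `"field": "*"` to expand all dotted fields, so check the docs for your version):

```json
PUT _ingest/pipeline/expand-dots
{
  "description": "Expand dotted keys into nested objects",
  "processors": [
    { "dot_expander": { "field": "http.request.method" } }
  ]
}
```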
Would that not require Beats to do so? By directly, I did not mean sending to Elasticsearch; we still send to a queue and then to Logstash. I meant that the client application itself would nest the JSON. Can java-ecs-logging handle this? For example, if a key in a StringMapMessage contains a dot, could it split the key on the dots and create nested JSON objects instead of a single dotted key?
This cannot be efficiently handled by this library, especially when custom fields are added via the MDC or through log4j2 message objects. There needs to be a post-processing step, either in Beats, an ingest node, or a custom script.
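As a rough illustration of what such a custom post-processing step would have to do, here is a minimal sketch in Java that expands a flat map with dotted keys into nested maps. This is not part of java-ecs-logging; the class and method names are made up for the example, and it deliberately ignores the collision case (e.g. both `"foo"` and `"foo.bar"` present), which is one reason this is awkward to do reliably inside the logging library:

```java
import java.util.HashMap;
import java.util.Map;

public class DotExpander {

    /**
     * Expands dotted keys into nested maps, e.g.
     * {"http.request.method": "GET"} -> {"http": {"request": {"method": "GET"}}}.
     * Note: throws ClassCastException on collisions such as having
     * both "foo" (scalar) and "foo.bar" in the input.
     */
    @SuppressWarnings("unchecked")
    public static Map<String, Object> expand(Map<String, Object> flat) {
        Map<String, Object> root = new HashMap<>();
        for (Map.Entry<String, Object> entry : flat.entrySet()) {
            String[] parts = entry.getKey().split("\\.");
            Map<String, Object> current = root;
            // Walk/create intermediate objects for all but the last segment.
            for (int i = 0; i < parts.length - 1; i++) {
                current = (Map<String, Object>) current
                        .computeIfAbsent(parts[i], k -> new HashMap<String, Object>());
            }
            current.put(parts[parts.length - 1], entry.getValue());
        }
        return root;
    }

    public static void main(String[] args) {
        Map<String, Object> flat = new HashMap<>();
        flat.put("http.request.method", "GET");
        flat.put("http.response.status_code", 200);
        System.out.println(expand(flat));
    }
}
```

Doing this once per event in every client application is exactly the per-event cost the library avoids by leaving keys dotted and deferring the expansion to the pipeline.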
Hello,

> Otherwise, the user doesn't know whether to access a field via doc["foo.bar"] or via doc["foo"]["bar"]. We don't want users to have knowledge about which fields are nested vs dotted as this is an implementation detail that can vary with different ecs-logging implementations and may even change for the same implementation.

This is a quote from issue #51, which is closed.
If we want to log directly into the ELK pipeline, and we can live with the fact that all dots are expanded into nested JSON objects (no exceptions), then, under those conditions, is it possible to send nested JSON?
I would like to avoid Filebeat for now, and the de_dot filter plugin is reported to be CPU-intensive. See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-de_dot.html
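For completeness, the linked de_dot docs describe a `nested` option that recreates sub-objects instead of merely replacing the dots with a separator; a minimal Logstash filter sketch would look like the following (verify the option names against the docs for your Logstash version, and note this is the filter the documentation flags as CPU-intensive):

```
filter {
  de_dot {
    nested => true
  }
}
```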
Thank you for considering this option.