Open belimawr opened 1 month ago
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
Another option is to move the log entry inside the `fail_on_error` check, so we only log when the processor actually fails.
Here is where we're logging it: https://github.com/elastic/beats/blob/f6b8701e8c8034836becb9ccaf3f4b2449fc589f/libbeat/processors/actions/copy_fields.go#L81C4-L87
@belimawr I believe the change in beats is a better change; it seems weird that `fail_on_error: true` would flood the logs.
> @belimawr I believe the change in beats is a better change; it seems weird that `fail_on_error: true` would flood the logs.

Did you mean `fail_on_error: false`? I'd expect the processor to be verbose if it fails so users can easily know something is not working as expected.
Even with `fail_on_error: false` it fails on every event, hence it floods the logs. Regardless of changing when/how it logs, I believe the Elastic-Agent should not rely on processors silently failing when trying to populate some fields; we have mechanisms to avoid this, and I believe they should be used.
For confirmed bugs, please report:
The `filestream-monitoring` Beat deployed by the Elastic-Agent to collect its own logs is flooding the event logs with a `copy_fields` processor error. In my test, that error accounted for about 38% of the entries in the event log.
This seems to be coming from the following chain of processors (from `components/filestream-monitoring/beat-rendered-config.yml`). The third and fourth processors copy different fields to the same destination, so if the third runs successfully, the fourth will always fail and generate the log message above.
While this can be the intended behaviour (trying to set `data_stream.dataset` from multiple sources), it is flooding our logs. We can use processor conditions to avoid running the `copy_fields` processor if the field is already present, thus avoiding the flood of debug logs.

Steps to reproduce
`logs/elastic-agent-8.15.0-25075f/events/*.ndjson`)
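The condition-based fix suggested above could look roughly like the following. This is a sketch only: the source field `data_stream.dataset_original` is an illustrative assumption, not taken from the actual rendered config, and the real monitoring config contains more processors than shown here.

```yaml
processors:
  # Only run this copy_fields if the destination was not already set by an
  # earlier processor, so it no longer fails (and logs) on every event.
  - copy_fields:
      fields:
        - from: data_stream.dataset_original   # assumed source field
          to: data_stream.dataset
      fail_on_error: false
      ignore_missing: true
      when:
        not:
          has_fields: ['data_stream.dataset']
```

With the `when.not.has_fields` condition, the processor is skipped entirely when the destination field already exists, instead of running, failing, and emitting a debug log for every event.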