Closed: bvahdat closed this issue 10 months ago.
@bvahdat was asking me if this has been resolved. I understand that #6541 addresses sanitization, and wondered whether it resolves this issue.
Same problem here: app-insights doesn't pick up the masked logs. I can see they are masked in the pod logs but unmasked in app-insights.
Thanks
Hi, I am facing the same problem. Has this been solved, please?
@sahil-goel @srferron @richorama @bvahdat Sorry for taking so long to get to this issue.
Can you try our TelemetryProcessor preview feature to mask sensitive data in log messages? Here is a sample of the JSON config; see the sketch below.
The whole test is in #3408. Please let me know if this works for you, and ping me if you have further questions.
Essentially, the sensitive data can be extracted out into a log attribute and then deleted on the fly. The log body will then contain {redactedKeyName} instead of "*****".
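A minimal sketch of what such an applicationinsights.json could look like, assuming the preview processors syntax; the regex and the redactedPassword attribute name are illustrative only:

```json
{
  "preview": {
    "processors": [
      {
        "type": "log",
        "body": {
          "toAttributes": {
            "rules": [
              "password=(?<redactedPassword>[^\\s]+)"
            ]
          }
        }
      },
      {
        "type": "attribute",
        "actions": [
          {
            "key": "redactedPassword",
            "action": "delete"
          }
        ]
      }
    ]
  }
}
```

The first processor moves the matched value out of the log body into a redactedPassword attribute (the body then shows {redactedPassword} in its place), and the second processor deletes that attribute so the raw value is not exported.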
If this still doesn't meet your needs, I can try to apply the mask feature to the log body. That would require a feature improvement and a new release.
@heymas - what you explained doesn't really help.
I have 100 microservices running in an AKS cluster, each connecting to the same Application Insights resource. Every microservice has its own configuration for what to mask in the logs. As of now, things are getting masked in the AKS logs as per each microservice's configuration, but when it comes to the Application Insights logs, the fields are not masked.
applicationinsights.json is a centralised configuration and is not per microservice.
Ideally, I would want the logs to be exactly the same in AKS and App Insights.
@sahil-goel How do you attach the Java agent? At the microservice level or the AKS cluster level?
https://github.com/open-telemetry/opentelemetry-java/issues/5187#issuecomment-1689923356
The logstash-logback-encoder library offers some powerful configuration. Can you give it a try?
This issue has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 7 days. It will be closed if no further activity occurs within 7 days of this comment.
Expected behavior
The trace logs provided by app-insights should reflect the effective container logs of a given application.
Actual behavior
To have custom dimensions for our structured logging we make use of:
https://github.com/logfellow/logstash-logback-encoder
And we configure it to mask out sensitive data as per the documentation here:
https://github.com/logfellow/logstash-logback-encoder#identifying-field-values-to-mask-by-value
And it actually works pretty well, because looking into the container logs we see that the sensitive fields inside the JSON are properly masked out; for example, the container log entries look like:
Notice the masked-out fields, which are:
But when we look at the corresponding trace logs of AppInsights in the portal, they are not masked and show the sensitive data we are actually trying to mask out!
The corresponding logback-spring.xml of the spring-boot application wires the logstash-logback-encoder encoder together with a masking jsonGeneratorDecorator (the spring-boot profile used at runtime is the default profile and not local).
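A minimal sketch of such a configuration, assuming the LogstashEncoder and the MaskingJsonGeneratorDecorator described in the library's documentation; the appender name, field paths and regex are illustrative, not the application's actual values:

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <!-- logstash-logback-encoder renders each log event as a JSON line -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <!-- Mask sensitive data while the JSON is being generated -->
      <jsonGeneratorDecorator class="net.logstash.logback.mask.MaskingJsonGeneratorDecorator">
        <defaultMask>****</defaultMask>
        <!-- Mask these fields by path (illustrative names) -->
        <path>password</path>
        <path>creditCardNumber</path>
        <!-- Mask any value matching this regex, whatever the field name -->
        <valueMask>
          <value>\d{13,16}</value>
          <mask>****</mask>
        </valueMask>
      </jsonGeneratorDecorator>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```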
So to say, from a timeline perspective, app-insights seems to pick up the written logs after the encoder tag of the configuration above, but before the jsonGeneratorDecorator tag takes effect.
To Reproduce
Steps to reproduce the behavior:
Sample Application
If applicable, provide a sample application which reproduces the issue.
System information
Please provide the following information:
3.2.11, 3.3.0 and 3.3.1 having the same behaviour
2.7.2
3.18.1
Logs
Turn on SDK logs and attach/paste them to the issue. If using an application server, also attach any relevant server logs.
The "Turn on SDK logs" link above doesn't work for me.
Screenshots
If applicable, add screenshots to help explain your problem.