As far as I know, when a TelemetryClient is created programmatically via its constructor, no sampling is involved. If ingestion sampling is not turned on for the AI resource either, you might be throttled based on the Application Insights pricing option.
@SergeyKanzhelev any suggestion how to diagnose?
@steve-torchia there is no default sampling behavior. First, can you confirm that the sampled data comes from EventFlow? Check the itemCount property on traces coming via this output. It may be that you are using this key to send some other telemetry.
If the sampled data is from this output, check whether you do anything with TelemetryConfiguration.Active anywhere in code.
If not, the only possibility is an ApplicationInsights.config file lying around... Maybe a forgotten file in the bin folder?
@SergeyKanzhelev We are definitely sending only via EventFlow. It's the only conduit we have to send traces to AI. I do NOT see an applicationinsights.config|xml file anywhere, nor do I see anything using TelemetryConfiguration.Active in the code.
With this query:
traces
| where timestamp > ago(24h)
| summarize traces = sum(itemCount), dataPoints = count() by bin(timestamp, 30min)
| sort by timestamp
I get the following results:
timestamp | traces | dataPoints |
---|---|---|
2017-11-08T22:00:00Z | 35594 | 35594 |
2017-11-08T21:30:00Z | 23247 | 23247 |
2017-11-08T21:00:00Z | 480747 | 434888 |
2017-11-08T20:30:00Z | 307390 | 121232 |
2017-11-08T20:00:00Z | 100145 | 72858 |
@steve-torchia yes, you are right. When itemCount is not 1, sampling happened.
I promise there is no magic, and sampling is not enabled by default. In fact, it's defined in a separate assembly called Microsoft.AI.WindowsTelemetryChannel or something like this.
Do you see this behavior when debugging in VS or when deployed to production? Sorry for repeating the question - maybe when you deploy, there is a config file left over from a previous deployment?
If you can see this locally, can you check the TelemetryConfiguration.Active.TelemetryProcessors collection, just to make sure it's empty?
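A minimal sketch of that check (the class and method names here are illustrative, not from the project; it only assumes the Microsoft.ApplicationInsights package that the EventFlow output already references):

```csharp
using System;
using Microsoft.ApplicationInsights.Extensibility;

public static class TelemetryDiagnostics
{
    // Dump whatever telemetry processors are registered on the shared Active
    // configuration. An AdaptiveSamplingTelemetryProcessor showing up here
    // would mean something in the process enabled sampling.
    public static void DumpActiveProcessors()
    {
        foreach (var processor in TelemetryConfiguration.Active.TelemetryProcessors)
        {
            Console.WriteLine(processor.GetType().FullName);
        }
    }
}
```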
@SergeyKanzhelev Found it.
There was a "rogue" .UseApplicationInsights() call on one of our services' WebHostBuilders. With no ApplicationInsights.config file, it seems to default to turning sampling on.
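For context, here is a minimal sketch, assuming an ASP.NET Core style Program/Startup (the names are illustrative, not our actual code), of the kind of WebHostBuilder the stray call was hiding in:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.Run(context => context.Response.WriteAsync("ok"));
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        new WebHostBuilder()
            .UseKestrel()
            // .UseApplicationInsights()  // the "rogue" call: it wires up the
            //                            // default AI pipeline (adaptive
            //                            // sampling included) even with no
            //                            // ApplicationInsights.config present
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
```

Removing that call leaves EventFlow as the only thing sending traces to AI.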
Glad you were able to get to the bottom of this. It makes sense. UseApplicationInsights() will add a bunch of stuff to the configuration.
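If you do want to keep the ASP.NET Core integration on that service, a possible alternative (assuming the Microsoft.ApplicationInsights.AspNetCore package and its ApplicationInsightsServiceOptions.EnableAdaptiveSampling option are available in your version) is to register it with adaptive sampling switched off, for example:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Keep request/dependency collection but disable adaptive sampling,
        // so telemetry flowing through the shared configuration (including
        // the EventFlow output) should not be sampled.
        services.AddApplicationInsightsTelemetry(options =>
        {
            options.EnableAdaptiveSampling = false;
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.Run(context => context.Response.WriteAsync("ok"));
    }
}
```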
Using Microsoft.Diagnostics.EventFlow.Core (1.1.6) and Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsights (1.2.0)
We have a custom logger that outputs both to disk and to ApplicationInsights via EventFlow.
What we are seeing in ApplicationInsights on Azure is the following message:
I checked our AI resource on Azure and Ingestion sampling is NOT turned on. I believe this means that there is some Adaptive sampling going on at the app level. From the AI documentation it seems like the only way to switch this off is to remove the AdaptiveSamplingTelemetryProcessor node from the ApplicationInsights.config file.
(reference: https://docs.microsoft.com/en-us/azure/application-insights/app-insights-sampling)
The strange thing is that we do NOT have an applicationinsights.config file anywhere, and looking at the source code for ApplicationInsightsOutput.cs, it just instantiates a TelemetryClient with no configuration in this case.
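To illustrate the point, here is a rough sketch of what the output effectively does (not a verbatim copy of ApplicationInsightsOutput.cs; the class name and parameters are made up). A parameterless TelemetryClient binds to TelemetryConfiguration.Active, so any telemetry processors registered on that shared configuration also apply to traces sent this way:

```csharp
using Microsoft.ApplicationInsights;

public static class EventFlowOutputSketch
{
    // Roughly what the EventFlow Application Insights output does: a
    // TelemetryClient created with no explicit configuration falls back to
    // TelemetryConfiguration.Active, including whatever telemetry processors
    // (e.g. adaptive sampling) other code in the process registered there.
    public static void Send(string message, string instrumentationKey)
    {
        var client = new TelemetryClient();
        client.InstrumentationKey = instrumentationKey;
        client.TrackTrace(message);
    }
}
```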
Does that mean that this is default behavior? Is there something we can put inside an ApplicationInsights.config file that would stop sampling?
Here is the eventFlowConfig.json file