MattGal opened 5 years ago
Adding this for context: https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.trace.autoflush?view=netframework-4.8
This was a pretty unexpected change in behavior, especially since the sample for using this trace listener includes setting this value to true. It will probably break a lot of production servers (like it did ours for 5 hours while we struggled to diagnose the problem with no logs available, since it was the logging framework itself that was behaving strangely). I already opened an issue on the relevant sample that caused us to set autoFlush to true in the first place: https://github.com/MicrosoftDocs/azure-docs/issues/37662
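For reference, this is roughly the configuration shape that triggers the problem (a sketch assembled from the linked docs; the listener name is arbitrary, and the listener type is the one shipped in the Microsoft.ApplicationInsights.TraceListener package):

```xml
<!-- App.config fragment: autoflush="true" (as in the sample) makes every
     Trace.Write* call flush all registered listeners, including the
     Application Insights one. -->
<configuration>
  <system.diagnostics>
    <trace autoflush="true">
      <listeners>
        <add name="aiTraceListener"
             type="Microsoft.ApplicationInsights.TraceListener.ApplicationInsightsTraceListener, Microsoft.ApplicationInsights.TraceListener" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```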
Thanks for reporting!
Yes, flushing on every item causes TransmissionSender (default capacity of 3 concurrent transmissions) to run out of capacity quickly, and the SDK will be forced to buffer data heavily in memory or on disk. We'll investigate how to fix this properly.
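A purely illustrative Python sketch of the failure mode described above (the class and numbers are hypothetical; only the capacity of 3 comes from the comment). Flushing per item turns every trace into its own tiny transmission, so the sender's slots fill immediately and everything else spills over:

```python
# Hypothetical model, not the actual SDK: a sender with room for 3
# in-flight transmissions; items beyond that spill to a backlog
# (standing in for the SDK's memory/disk buffering).
class BoundedSender:
    CAPACITY = 3  # default TransmissionSender capacity mentioned above

    def __init__(self):
        self.in_flight = 0
        self.spilled = 0  # stands in for memory/disk buffering

    def send_transmission(self):
        """Dispatch one transmission, spilling if all slots are busy."""
        if self.in_flight < self.CAPACITY:
            self.in_flight += 1
        else:
            self.spilled += 1

# Flushing after every trace call: 100 items -> 100 one-item transmissions.
per_item = BoundedSender()
for _ in range(100):
    per_item.send_transmission()

# Normal batching: the same 100 items grouped into 2 transmissions of 50.
batched = BoundedSender()
for _ in range(2):
    batched.send_transmission()

print(per_item.spilled)  # 97 items spill past the 3-slot sender
print(batched.spilled)   # 0
```

The point is not the exact numbers but the shape: per-item flushing scales the transmission count with the item count, while batching keeps it near constant.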
This issue is stale because it has been open 300 days with no activity. Remove stale label or comment or this will be closed in 7 days.
@cijothomas have you had any chance to investigate more in the past couple years?
No. The doc change was made to prevent people from accidentally setting auto-flush. No investments/investigations have been made in the logging adapters for a long time, as most investment went into supporting ILogger-based logging.
This issue is stale because it has been open 300 days with no activity. Remove stale label or this will be closed in 7 days. Commenting will instruct the bot to automatically remove the label.
Another year, another reply to keep this issue alive, since as far as I can tell no one is claiming it's resolved.
No work has been done on any logging adapter other than ILogger for a long time.
This issue is stale because it has been open 300 days with no activity. Remove stale label or this will be closed in 7 days. Commenting will instruct the bot to automatically remove the label.
OK, I give up, I will let it be closed with no fix.
This commit: https://github.com/microsoft/ApplicationInsights-dotnet-logging/commit/1a31e7b59954bb8f3f00a8855750205a3e36b709
... when combined with a System.Diagnostics.Trace configuration where autoflush is set to true, caused our service to call Flush() on every Trace.Write* call, which rapidly made our Azure Cloud Service unusable.
We encountered this when upgrading the libraries from version 2.4 to 2.10. The autoflush setting ends up calling the telemetry client's flush function on every trace call, leading to rapid loss of network functionality and/or writing so much telemetry to disk that the disk fills and the service dies.
We're unblocked on the .NET Engineering side of things here, but it'd be useful to try to play nicely with this setting.