JDetmar / NLog.Extensions.AzureStorage

NLog Target for Azure Storage. Uses NLog batch write to optimize writes to Storage.

Not handling BlockCountExceedsLimit when using BlobStorageTarget #91

cowlinb6 closed this issue 1 year ago

cowlinb6 commented 3 years ago

Hi

I'm after a bit of advice on how to deal with this.

When using the BlobStorageTarget, there is a limit of 50,000 appends to each blob. When this limit is reached, logging just stops and does not recover unless the BlobName changes.

When the AppendBlockAsync request completes, it fails with:

```
Azure.RequestFailedException HResult=0x80131500
Message=The committed block count cannot exceed the maximum limit of 50,000 blocks.
RequestId:fb841d35-001e-00fa-6417-01098b000000
Time:2021-02-12T08:16:14.1048795Z
Status: 409 (The committed block count cannot exceed the maximum limit of 50,000 blocks.)
ErrorCode: BlockCountExceedsLimit
```

I've handled this by using a ContinueWith on the AppendBlockAsync call:

```csharp
return blob.AppendBlockAsync(stream, cancellationToken: cancellationToken)
    .ContinueWith(async (response) => await HandleAppendBlockFailure(response),
        TaskContinuationOptions.OnlyOnFaulted);
```

Then within the HandleAppendBlockFailure method I change the BlobName to have a post-fixed version number. Since this switches to a new blob, logging then continues.
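For illustration, a minimal sketch of what that recovery could look like; RollingAppendWriter, the version-suffix naming, and the handler shape are all illustrative, not the actual target code:

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

class RollingAppendWriter
{
    private readonly BlobContainerClient _container;
    private readonly string _baseBlobName;
    private int _version;
    private AppendBlobClient _blob;

    public RollingAppendWriter(BlobContainerClient container, string baseBlobName)
    {
        _container = container;
        _baseBlobName = baseBlobName;
        _blob = container.GetAppendBlobClient(baseBlobName);
    }

    // Wired up via ContinueWith(..., TaskContinuationOptions.OnlyOnFaulted).
    private Task HandleAppendBlockFailure(Task failedAppend)
    {
        if (failedAppend.Exception?.GetBaseException() is RequestFailedException ex
            && ex.ErrorCode == "BlockCountExceedsLimit")
        {
            // Post-fix the blob name with a version so future appends go to a fresh blob.
            _version++;
            _blob = _container.GetAppendBlobClient($"{_baseBlobName}.v{_version}");
        }
        return Task.CompletedTask;
    }
}
```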

This seems to resolve the issue, but is this the appropriate way to handle it? I can provide more sample code and/or a PR if needed.

Regards

Ben

snakefoot commented 3 years ago

The BlobTarget uses TaskDelayMilliseconds="200" by default, so it performs at most 5 appends/sec (1000 ms / 200 ms), which means 18,000 appends/hour.

If you configure the BlobTarget to create a new blob every hour, then you should not get near the limit of 50,000 appends:

blobName="my_app/${date:universalTime=true:format=yy-MM-dd_HH}.log"

cowlinb6 commented 3 years ago

The issue with doing that is that a new log file will be generated every hour. As we have a significant number of log files, and some of them are relatively small, it makes it more difficult to find what you need.

snakefoot commented 3 years ago

Maybe one could do a "trick" using cached=true, like this:

blobName="my_app/${date:universalTime=true:format=yy-MM-dd_HH_mm_ss:cached=true}.log"

When the BlobTarget experiences a BlockCountExceedsLimit error, it would re-initialize the BlobName layout and attempt to render a new blob name. The cached=true wrapper discards its cached value on re-initialize and generates a new value (keeping that value until the next "failure").

The BlobTarget would then only move to the next blob name when the current blob is failing. Not sure if that will work for you? (This would require a code change in the BlobTarget, but it should not be too difficult.)
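Conceptually (a sketch only, not the actual BlobTarget code), the cached name behaves like a tiny lazily-filled cache that is dropped when the error is seen; CachedBlobName here just stands in for NLog's cached=true layout wrapper:

```csharp
using System;

// Conceptual stand-in for a blobName layout wrapped in cached=true.
class CachedBlobName
{
    private readonly Func<string> _render;
    private string _cached;

    public CachedBlobName(Func<string> render) => _render = render;

    // Renders once, then keeps returning the same value.
    public string Value => _cached ??= _render();

    // Called after BlockCountExceedsLimit: the next read of Value renders a fresh name.
    public void Reset() => _cached = null;
}
```

On BlockCountExceedsLimit the target would call Reset() and retry the append against the blob named by the fresh Value.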

cowlinb6 commented 3 years ago

I take it that would stop the log rolling over at all until there is a failure? We currently structure them by day.

snakefoot commented 3 years ago

@cowlinb6 You are right. You want it to roll on 2 events:

- when the date changes (the normal daily rollover), and
- when the current blob fails with BlockCountExceedsLimit.

Then it starts to get a little tricky, because the target should explicitly re-initialize the BlobName layout on both events. A layout along the lines of the sketch below would then work, giving a new blob name on each event.
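As an illustration only (a sketch of the idea, not a confirmed configuration), a layout combining a non-cached daily part with a cached time-of-day part would roll on both events:

```xml
<!-- Illustrative: the daily date part rolls on its own; the cached time part only
     changes when the layout is re-initialized after BlockCountExceedsLimit. -->
blobName="my_app/${date:universalTime=true:format=yy-MM-dd}_${date:universalTime=true:format=HH_mm_ss:cached=true}.log"
```

That would give blob names along the lines of my_app/21-02-15_00_00_01.log, rolling to something like my_app/21-02-15_09_41_27.log after a failure, and to a new 21-02-16 name the next day.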

cowlinb6 commented 3 years ago

Just tried to test this, but it doesn't generate a new file and silently fails as before. This is the blob name that I'm using:

snakefoot commented 3 years ago

@cowlinb6 Sorry to have misled you. I thought we were discussing how one could implement support for your scenario.

Both approaches are just suggestions, and neither of them has been implemented. But I guess you are telling me that the generic approach I suggested could work for you?

cowlinb6 commented 3 years ago

@snakefoot my misunderstanding there; I just assumed this was already implemented. Yes, this would cover my scenario.

snakefoot commented 3 years ago

Created #93 with support for rolling with help from cached=true. You can try the pre-release NuGet package here:

https://ci.appveyor.com/project/JDetmar/nlog-extensions-azurestorage/builds/37781353/artifacts

Just download NLog.Extensions.AzureBlobStorage.3.1.0.nupkg and put it in a local nuget-package folder for testing.
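If you haven't used a local package folder before, a minimal nuget.config sketch pointing at one (the folder path is just an example):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Example path; drop the downloaded .nupkg into this folder. -->
    <add key="local-test" value="C:\local-packages" />
  </packageSources>
</configuration>
```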

cowlinb6 commented 3 years ago

@snakefoot I've just tried this (not from Nuget but from source) and it doesn't seem to handle the exception.

[screenshot: https://user-images.githubusercontent.com/12410874/107976464-83704680-6fb1-11eb-8424-8cbc8077a2c5.png]

I think this may be due to the AppendBlockAsync call not being awaited in the AppendFromByteArrayAsync method.
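To illustrate the suspicion with hypothetical, simplified code (not the actual method): if the task is returned rather than awaited, a try/catch inside the method never observes the failure, so any recovery logic cannot run:

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs.Specialized;

static class AwaitBugSketch
{
    public static async Task AppendSketchAsync(AppendBlobClient blob, Stream stream, CancellationToken token)
    {
        try
        {
            // If this were "return blob.AppendBlockAsync(...)" in a non-async method,
            // the RequestFailedException would surface on the returned task and
            // escape this try/catch entirely.
            await blob.AppendBlockAsync(stream, cancellationToken: token);
        }
        catch (RequestFailedException ex) when (ex.ErrorCode == "BlockCountExceedsLimit")
        {
            // Recovery (e.g. rolling the blob name) is only reachable when awaited.
        }
    }
}
```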

snakefoot commented 3 years ago

The PullRequest will not prevent the exception from occurring. It will just attempt to recover by generating a new BlobName, and then retry the write-operation.

cowlinb6 commented 3 years ago

Yes it didn’t recover, just kept throwing the same exception.

Regards



snakefoot commented 3 years ago

Unless you can find out where the bug is, I guess you just have to show me the layout that you have configured; then I can try to debug why it is failing. My unit test has no issue with rolling-on-error.

snakefoot commented 3 years ago

@cowlinb6 Have you found a different solution, so the pull-request is no longer relevant?

cowlinb6 commented 3 years ago

Haven't had a chance to test this more yet. It doesn't work as-is for this scenario, so the PR can be abandoned. I'll try to do some more testing and let you know. Thanks anyway.