Open jsquire opened 3 years ago
@akozmic Not sure if this will help, but because I don't want to / cannot use the workarounds suggested (requeuing to the topic is not possible since I only own one of the many subscriptions on that topic, and Defer requires some kind of persistent storage to pick the deferred message up again, and I don't want to bolt another solution onto this one), I have hacked together something that works for me.
Option 1: If you don't have any state management, you can simply catch the exception and NOT complete the message, which SHOULD NOT (not 100% sure, but I will test) tie up any threads, and let the message lock duration expire on its own before the retry happens. I find this behavior very strange, because the message lock duration is set to 5 minutes (the maximum is 5 minutes), yet I consistently see the retry every 2.5 minutes.
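For anyone wanting to try Option 1 with the current Azure.Messaging.ServiceBus processor, here is a minimal sketch. It assumes auto-complete and auto lock renewal are switched off so the lock really does expire on its own; the connection string, queue name, and HandleAsync body are placeholders, not anything from this thread.

using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<connection-string>");
var processor = client.CreateProcessor("my-queue", new ServiceBusProcessorOptions
{
    AutoCompleteMessages = false,              // settle explicitly on success
    MaxAutoLockRenewalDuration = TimeSpan.Zero // don't renew; let the lock expire on failure
});

processor.ProcessMessageAsync += async args =>
{
    try
    {
        await HandleAsync(args.Message);               // real processing goes here
        await args.CompleteMessageAsync(args.Message); // only complete on success
    }
    catch (Exception)
    {
        // Swallow the exception and neither complete nor abandon the message.
        // Once the lock expires, the broker makes the message available again.
    }
};
processor.ProcessErrorAsync += _ => Task.CompletedTask;

await processor.StartProcessingAsync();
// Keep the app alive while messages are processed, e.g. await Task.Delay(Timeout.Infinite);

static Task HandleAsync(ServiceBusReceivedMessage message) => Task.CompletedTask; // replace with real work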
Option 2: In my case, state management is needed across retries. For example: each unit of work has to do step A, then B, then C.
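A rough sketch of what that state management could look like, assuming the last completed step is stashed in an application-managed property on a cloned message that gets re-queued for retry; the property name LastCompletedStep and the DoStep helpers are made up for illustration, not part of the SDK.

using Azure.Messaging.ServiceBus;

static class SteppedProcessing
{
    // Runs A -> B -> C, skipping steps a previous attempt already finished.
    // Returns a clone carrying the progress marker when a step fails,
    // or null when everything succeeded.
    public static async Task<ServiceBusMessage?> RunStepsAsync(
        ServiceBusReceivedMessage received, CancellationToken ct)
    {
        var done = received.ApplicationProperties.TryGetValue("LastCompletedStep", out var value)
            ? (string)value
            : "none";

        var clone = new ServiceBusMessage(received); // the retry copy carries our marker

        try
        {
            if (done == "none")
            {
                await DoStepA(ct);
                clone.ApplicationProperties["LastCompletedStep"] = "A";
                done = "A";
            }
            if (done == "A")
            {
                await DoStepB(ct);
                clone.ApplicationProperties["LastCompletedStep"] = "B";
            }
            await DoStepC(ct);
            return null;  // all steps done; the caller just completes the original
        }
        catch (Exception)
        {
            return clone; // the caller completes the original and schedules this clone for a delayed retry
        }
    }

    static Task DoStepA(CancellationToken ct) => Task.CompletedTask; // placeholder work
    static Task DoStepB(CancellationToken ct) => Task.CompletedTask;
    static Task DoStepC(CancellationToken ct) => Task.CompletedTask;
}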
Cons of this hack:
@KimberlyPhan thank you for the suggestions. I actually tried Option 1 already and found that if I just caught the exception and swallowed it, it didn't look like the DeliveryCount on the message was being updated. The lock on the message would expire and the message would be retried after the designated amount of time, but with DeliveryCount=0. It really seemed like it was entering an infinite loop.
Also, I do apologize; I did not see from a previous message that this feature was actively being worked on and should launch soon based on the estimate. I look forward to it.
@akozmic I see. We use the criteria below to put the message in the DLQ, so it's then just a matter of max retry count × 2.5 minutes.
var timeInQueue = DateTimeOffset.UtcNow.Subtract(args.Message.EnqueuedTime).TotalMinutes;
if (timeInQueue > maxTimeToProcessInMinutes)
{
    await args.DeadLetterMessageAsync(message, $"TimeInQueue exceeds limit {maxTimeToProcessInMinutes}");
}
Is this feature going to be implemented in the java sdk too? Would highly appreciate it
When the service adds the operation, the official Azure SDK packages will support it, including our Java libraries.
Thank you for your feedback on this item. We are currently doing active development on this feature, and expect to have more to share around its release in the next couple of months.
@EldertGrootenboer it's been 5 months since your last update. How is this feature going? Can we expect it anytime soon? 🙏🏻
Yup, I hope we get this soon. 🙏🏻
Quite recently I had to make stability and correlation fixes to our existing customized cobweb solution, and would really appreciate having this as a native feature.
We are currently doing active development on this feature, and expect to have more to share around its release in the next couple of months.
Thanks @EldertGrootenboer. It would also be great to make sure that abandoning a message with a custom delay is supported in the Python client as well.
We also need this feature badly. Hope it will be implemented soon.
@EldertGrootenboer would you have any updates to share? This is such critical functionality; it would help a ton of customers.
@EldertGrootenboer You shouldn't need a separate queue for retry. You should be able to set a retry time when deferring.
Due to the lack of this feature I had to reinvent the wheel, maybe this is helpful for someone else:
This is the top-level catch where I am consuming messages; adjust it to your needs. Note that this is not transactional, so there is room for improvement. One option is to not schedule a new message if the Service Bus-managed delivery count is not 1. You might also want to check the application-managed DeliveryCount property to determine whether the message should be sent to the DLQ; I am managing that elsewhere.
catch (Exception e)
{
    // Clone the original message
    var clonedMessage = new ServiceBusMessage(message);

    // Complete the original message
    // The new message will have a new sequence number
    await messageActions.CompleteMessageAsync(message, ct);

    // Adjust our application-managed DeliveryCount
    // Note that this is _not_ incremented by the service bus
    clonedMessage.IncrementApplicationManagedDeliveryCount();

    await using var sender = _serviceBusClient.CreateSender(Environment.GetEnvironmentVariable("MyQueueName"));

    // The message will be scheduled to be retried in 15 * DeliveryCount seconds.
    // Adjust to any backoff policy that fits you.
    var scheduledEnqueueTime = DateTimeOffset.UtcNow.AddSeconds(15 * clonedMessage.GetApplicationManagedDeliveryCount()!.Value);
    await sender.ScheduleMessageAsync(clonedMessage, scheduledEnqueueTime, ct);
}
ServiceBusMessageExtensions.cs
public static void IncrementApplicationManagedDeliveryCount(this ServiceBusMessage message)
{
    if (message.ApplicationProperties.TryGetValue(nameof(ServiceBusReceivedMessage.DeliveryCount), out var deliveryCount))
        message.ApplicationProperties[nameof(ServiceBusReceivedMessage.DeliveryCount)] = (int)deliveryCount + 1;
    else
        message.ApplicationProperties.Add(nameof(ServiceBusReceivedMessage.DeliveryCount), 1);
}

public static int? GetApplicationManagedDeliveryCount(this ServiceBusMessage message)
{
    if (message.ApplicationProperties.TryGetValue(nameof(ServiceBusReceivedMessage.DeliveryCount), out var deliveryCount))
        return (int)deliveryCount;

    return null;
}
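As a possible follow-up to the note above about using the application-managed DeliveryCount to decide when to give up, a guard along these lines could sit at the top of the catch block, before the original message is completed. The threshold of 5 and the reason text are arbitrary, and it assumes messageActions exposes the dead-letter overload that takes a reason string.

// Hypothetical guard: stop retrying after an arbitrary number of
// application-managed deliveries and dead-letter the original instead.
const int maxApplicationManagedDeliveries = 5;

var attempts = message.ApplicationProperties.TryGetValue(nameof(ServiceBusReceivedMessage.DeliveryCount), out var count)
    ? (int)count
    : 0;

if (attempts >= maxApplicationManagedDeliveries)
{
    await messageActions.DeadLetterMessageAsync(
        message,
        "MaxRetriesExceeded",
        $"Gave up after {attempts} application-managed deliveries.",
        ct);
    return;
}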
@jokarl Notice that this is not really a retry of the message, but essentially a new message, as stated previously in this thread.
You have caught the persistence issue with DeliveryCount, but it seems that it is not the only metadata lost in this kind of cloning. I haven't gone through it property by property, but for example if the message is a forwarded DLQ entry, I think the dead-letter reason and description are lost as well.
@AlexEngblom You are right that metadata is lost; perhaps I am not fulfilling the requirements asked for in this issue. You can see which properties are ignored here.
We are currently doing active development on this feature, and expect to have more to share around its release in the next couple of months.
Is there any news yet on when we could expect this feature?
@EldertGrootenboer as it's been quiet for quite some time on this topic, I'd like to ask what the status of this feature's development is, since you mentioned a while ago that it was being actively developed. Can we expect this feature in the foreseeable future, or should we start implementing our own workaround? This is something that we really need (on multiple projects). Thanks.
We are also patiently waiting for this feature.
Development for this feature is underway, with an initial release expected in Q1CY25. We are aware this is a highly anticipated feature, and thank you for your patience as we continuously balance the different priorities of the various requests we receive.
@EldertGrootenboer Thanks for the update. Please don't forget this; it has been the number 1 feature request for the past few years and would save a LOT of time and effort.
Issue Transfer
This issue has been transferred from the Azure SDK for .NET repository, #9473.
Please be aware that @Bnjmn83 is the author of the original issue and include them in any questions or replies.
Details
This is still a desired feature and totally makes sense in many scenarios. Is there any effort to implement this in the future?
Original Discussion
@msftbot[bot] commented on Tue Jan 14 2020
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @jfggdl
@jsquire commented on Tue Jan 14 2020
@nemakam and @binzywu: Would you be so kind as to offer your thoughts?
@nemakam commented on Tue Jan 14 2020
@Bnjmn83, this is a feature ask that we could work on in the future, but we don't have an ETA right now. As an alternative, you can implement this yourself on the client using the transactions feature: essentially, complete() the message and send a new message with the appropriate "scheduleTime" within the same transaction. That should behave similarly.
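For readers landing here today, a rough sketch of that suggestion with the current Azure.Messaging.ServiceBus package might look like the following, assuming the sender and receiver come from the same ServiceBusClient so the settle-and-send can share one transaction; the connection string, queue name, and 5-minute delay are placeholders.

using System.Transactions;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");
ServiceBusReceiver receiver = client.CreateReceiver("my-queue");
ServiceBusSender sender = client.CreateSender("my-queue");

ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();
if (received is null) return; // nothing to process

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Send a copy that only becomes visible at the chosen retry time...
    var retryCopy = new ServiceBusMessage(received);
    await sender.ScheduleMessageAsync(retryCopy, DateTimeOffset.UtcNow.AddMinutes(5));

    // ...and complete the original within the same transaction.
    await receiver.CompleteMessageAsync(received);

    scope.Complete();
}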
@axisc commented on Thu Aug 13 2020
I think @nemakam's recommendation of completing the message and sending a scheduled message is a better approach.
Service Bus (or any message/command broker) keeps its cursor on the server/sender side. When a receiver/client wants to control when a message becomes visible again (i.e., a custom delay/retry), it must take over the cursor from the sender. This can be achieved with the options below -
Do let me know if this approach is too cumbersome and we can revisit. If not, I can close this issue.
@mack0196 commented on Wed Mar 31 2021
If the subscription/queue has messages in it, will the scheduled message 'jump to the front of the line' at its scheduled time?
@ramya-rao-a commented on Mon Nov 01 2021
@shankarsama Please consider moving this issue to https://github.com/Azure/azure-service-bus/issues where you track feature requests for the service