Hi @babinecm,
Each queue created with the management API will behave as durable. There is no easy way to change this. What is your use case?
Thanks, Havret
Hi,
In our organization we have an anycast queue named "event-topic". I don't have access to the MQ server, so I can't see its settings. I can only see the list of currently available queues.
When I try to connect with AddConsumer("event-topic", RoutingType.Multicast, "my-queue", handler), it always creates an anycast queue with address "event-topic" and queue name "my-queue". It's always anycast, but I need multicast. The routing type parameter is ignored, and the queue is not durable (it disappears from the list of available queues a few minutes after I disconnect).
I can create a durable multicast queue only by calling AddSharedDurableConsumer("event-topic", "my-queue"), but that creates a queue named "my-queue:global".
If I call AddConsumer() without a queue name, it creates a temporary multicast queue with a GUID name that changes every time I reconnect.
My goal is to create a durable multicast queue at the "event-topic" address with the name "my-queue".
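To make this concrete, here is roughly how the three cases above look in our setup (builder and handler are simplified placeholders, not our actual wiring):

```csharp
// Sketch only – `builder` comes from services.AddActiveMq(...) and
// `handler` is our message handler delegate.

// 1. Always ends up as an *anycast*, non-durable queue, even though
//    RoutingType.Multicast is requested:
builder.AddConsumer("event-topic", RoutingType.Multicast, "my-queue", handler);

// 2. Creates a durable multicast queue, but under the name "my-queue:global":
builder.AddSharedDurableConsumer("event-topic", "my-queue", handler);

// 3. Creates a temporary multicast queue whose GUID name changes on every reconnect:
builder.AddConsumer("event-topic", RoutingType.Multicast, handler);
```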
Can you confirm that EnableQueueDeclaration and EnableAddressDeclaration are included in your app setup, similar to the example in the sample app?
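For reference, a minimal sketch of the setup I have in mind (cluster name and endpoint details are placeholders):

```csharp
var endpoint = Endpoint.Create("localhost", 5672, "guest", "guest");

services.AddActiveMq("my-cluster", new[] { endpoint })
        .EnableQueueDeclaration()    // declare queues from config on startup
        .EnableAddressDeclaration()  // declare addresses from config on startup
        .AddConsumer("event-topic", RoutingType.Multicast, "my-queue", handler);
```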
Artemis automatically adds a 'global' postfix when you create a subscription using AddSharedDurableConsumer. To disable this, you need to modify the broker configuration, as demonstrated in the broker.xml.
I hope that helps, Havret
Thank you for your suggestion.
When I enable EnableQueueDeclaration or EnableAddressDeclaration (or both), I get an error and the client won't connect at all:

```
ArtemisNetClient: Failed to send the message
Cannot invoke "org.apache.activemq.artemis.core.transaction.Transaction.markAsRollbackOnly(org.apache.activemq.artemis.api.core.ActiveMQException)" because "x" is null
   at ActiveMQ.Artemis.Client.RequestReplyClient.<SendAsync>d__12.MoveNext()
   at ActiveMQ.Artemis.Client.AutoRecovering.AutoRecoveringRequestReplyClient.<SendAsync>d__7.MoveNext()
   at ActiveMQ.Artemis.Client.TopologyManager.<SendAsync>d__20.MoveNext()
   at ActiveMQ.Artemis.Client.TopologyManager.<GetQueueNamesAsync>d__9.MoveNext()
   at ActiveMQ.Artemis.Client.Extensions.DependencyInjection.ActiveMqTopologyManager.<CreateTopologyAsync>d__6.MoveNext()
   at ActiveMQ.Artemis.Client.Extensions.DependencyInjection.ActiveMqTopologyManager.<CreateTopologyAsync>d__6.MoveNext()
   at ActiveMQ.Artemis.Client.Extensions.DependencyInjection.ActiveMqClient.<StartAsync>d__5.MoveNext()
```
What version of ActiveMQ Artemis are you on? Maybe your app doesn't have access to the activemq.management queue? You need this for the TopologyManager to set up addresses and queues when it starts, based on your config. If you can get this changed, just using AddSharedDurableConsumer ought to do the trick for what you're trying to do.
I tried to update the ActiveMQ.Artemis.Client.Builders.ConsumerBuilder class, just as a proof of concept:
```csharp
// ...
// Hard-coded: always request the multicast capability.
private static Symbol[] GetCapabilities(ConsumerConfiguration configuration) => new[] { Capabilities.Multicast };
// ...
// Hard-coded: always request a durable terminus.
private static uint GetTerminusDurability(ConsumerConfiguration configuration) => TerminusDurability.UnsettledState;

// Use the configured queue name as the receiver name; fall back to a GUID.
private static string GetReceiverName(ConsumerConfiguration configuration) => configuration switch
{
    { Shared: true } => configuration.Queue,
    { Queue: not null and not "" } => configuration.Queue,
    _ => Guid.NewGuid().ToString()
};
```
I called AddConsumer("event-topic", RoutingType.Multicast, "my-queue", handler) and it worked. I can now see "my-queue" as multicast in the Artemis browser UI, and it is also durable.
So, how can I achieve this through the consumer configuration? I've tried many configurations, but nothing works except the "hard edit" described above.
Ah, I see what you've done here. You are using an FQQN (a fully qualified queue name, written as address::queue) to attach to your queue. The first line you included here is what made it work. When I was initially implementing ArtemisNetClient, I tried to make this possible, but the broker didn't support it at the time. I attempted to add support for this in this pull request, but I was encouraged to use the management API instead (which was my initial suggestion to you). However, it seems that after all these years, the creation of queues based on FQQN finally works. I don't see any reason why it shouldn't be compatible with ArtemisNetClient now. I'll make the necessary adjustments.
Please check out version 2.15.0-preview1, which should be available on NuGet soon.
I've tried NuGet v2.15.0-preview1 and the routing type now works. Thank you for the quick response. But the queue is not durable.
Can you please add Durable = consumerOptions.Durable in the same place where you added RoutingType = routingType? The default behavior won't change, and we would be able to set the consumer as durable, as I did here.
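In other words, something along these lines in the consumer configuration mapping (the surrounding initializer is just my guess at how that code is shaped):

```csharp
var configuration = new ConsumerConfiguration
{
    Address = address,
    Queue = queue,
    RoutingType = routingType,          // added in 2.15.0-preview1
    Durable = consumerOptions.Durable   // the proposed addition
};
```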
Durability is determined by ConsumerConfiguration in the ConsumerBuilder class:
```csharp
private static uint GetTerminusDurability(ConsumerConfiguration configuration) => configuration switch
{
    // A durable consumer keeps its queue across detach/reconnect.
    { Durable: true } => TerminusDurability.UnsettledState,
    _ => TerminusDurability.None
};
```
+1 We are also looking for an option to make consumers durable using the consumer options.
Kind regards,
Jacob
I've just run a quick check against Artemis v2.20.0, and the queue appears to have been created as durable. I know this is a minor change to make, but I would prefer to make it only if necessary. I would need to rethink the API since ConsumerOptions is also used for AddSharedConsumer and AddSharedDurableConsumer. Having these options available there wouldn't make much sense.
If you don't want to change ConsumerOptions, you could add a new AddDurableConsumer extension method to ActiveMqArtemisClientDependencyInjectionExtensions.cs and set Durable = true there, along the lines of the sketch below.
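A rough sketch of what I mean (MessageHandler stands in for whatever delegate type AddConsumer actually accepts, and the overload taking ConsumerOptions is assumed):

```csharp
public static IActiveMqBuilder AddDurableConsumer(this IActiveMqBuilder builder,
    string address, RoutingType routingType, string queue, MessageHandler handler)
{
    // Same as AddConsumer, but with the proposed Durable option forced on.
    return builder.AddConsumer(address, routingType, queue,
        new ConsumerOptions { Durable = true }, handler);
}
```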
Is this acceptable to you?
However, the question remains: do we really need it? As I mentioned in the comment above, I've verified that in the current build against Artemis v2.20.0, the queue is created as durable without the need to change TerminusDurability.
You can try it yourself:
Set up Artemis version 2.20.0:

```shell
docker run -d --name activemq-artemis -p 5672:5672 -p 8161:8161 havret/dotnet-activemq-artemis-client-test-broker:2.20.0
```
Then run a slightly modified version of the example application: just comment out the lines and run the app.
I see, but it doesn't work in our environment. Maybe it's some configuration on the server, but I don't have access to the server configuration. I can only see the list of available queues.
I did the following test: I set Durable = true as described above and connected to the server. The queue was created. I then disconnected from the server; the queue remained on the server and is still there. For some reason I have to explicitly tell the server to create a durable queue, otherwise the queue is deleted after some time.
I would really appreciate it if you added the option to explicitly define the queue durability.
It would be worthwhile to check your server configuration. Could you reach out to your Artemis Administrator and ask them to share the broker.xml file you guys are using?
There are a few settings that control when Artemis may delete queues and addresses:
```xml
<auto-delete-queues>true</auto-delete-queues>
<auto-delete-addresses>true</auto-delete-addresses>
<auto-delete-queues-message-count>0</auto-delete-queues-message-count>
<auto-delete-queues-delay>1000</auto-delete-queues-delay>
```
If I apply this configuration to my local instance of Artemis, the broker will remove the queue regardless of the change you're suggesting (Durable = true).
That being said, I'd prefer to get to the root cause of why the client and broker behave as they do, rather than making haphazard changes to the library that may seem to work but could blow up at the most unexpected moment in production because our understanding of the root cause was too superficial.
I hope that makes sense.
Here is the response from our Artemis Administrator.
The current version of Artemis MQ is 2.31.2. When the brokers restart, they delete every address that is not saved in the configuration and has no app connected to it. Client connection to the management API is disabled.
Configuration from broker.xml:
```xml
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<config-delete-queues>FORCE</config-delete-queues>
<config-delete-addresses>FORCE</config-delete-addresses>
```
Thanks for getting back to me with the config. From what I can see, the problem lies in the missing options:
```xml
<auto-delete-queues>false</auto-delete-queues>
<auto-delete-addresses>false</auto-delete-addresses>
<auto-delete-created-queues>false</auto-delete-created-queues>
```
These are set to true by default, so your addresses and queues will be automatically removed once there are no attached consumers or producers and no unconsumed messages remain in the queues. My guess is that the latter was giving you the impression that sometimes the queue is removed and sometimes it isn't: it was driven not by the client setting but by the outstanding messages in the queue. The settings config-delete-queues and config-delete-addresses deal only with config-created addresses/queues, meaning queues and addresses explicitly defined in your broker.xml file. For instance, you can define a queue in the following way:
```xml
<address name="test-address">
   <anycast>
      <queue name="test-queue">
         <durable>true</durable>
      </queue>
   </anycast>
</address>
```
If you remove it from the config file, it won't be automatically removed from your broker topology unless you explicitly set <config-delete-queues>FORCE</config-delete-queues>. The same applies to addresses.
In summary, please ask your Artemis Administrator to disable the auto-delete queues and addresses options, and everything should work fine (no queues should be removed). Another option is to ask them to explicitly define your queues as part of the broker configuration.
I hope that helps, Havret
Thanks for the help and explanation. I don't know how it works in detail. Now I see that the problem is in the server configuration. I'll try to ask our Artemis Administrator if this behavior can be changed.
At least we fixed the routing type when using FQQN (#472).
Thanks again. So I'm closing this issue.