Closed — heckad closed this issue 3 years ago
Hello,
I am not sure I understand what you mean. Could you please give some more information?
When we try to publish a message with a duplicated header, RabbitMQ returns an error; we would prefer that RabbitMQ return Ok and simply not add the message to the queue. This behaviour could be switched via an x-mode header with values such as decline and ignore (for example): decline would be the current behaviour, ignore the new one.
@noxdafox, do you understand what I propose? Is it hard to do?
It is not clear what you are trying to achieve and how. The plugin does not control directly what is communicated back to the clients.
How are you publishing the message? Which client library are you using? What are the parameters you are using when publishing the messages? Which type of de-duplication mechanism are you using (exchange or queue)?
1) We publish many messages in a transaction.
2) We use aio_pika.
3) We just add the x-deduplication-header header, where the value is a URL.
4) We use a queue.
We want the crawler to crawl only unique URLs. If we publish a message with a duplicated value, we get a PRECONDITION_FAILED - partial tx completion error without any additional information. We just want messages with duplicated headers not to be added to the queue, with the client simply continuing its work. It is like set.add behaviour: if a value is already in the set, we do nothing.
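To make the request concrete, here is a minimal Python sketch of the behaviour being asked for: a publish call that, like set.add, silently ignores duplicates instead of raising an error. DedupQueue and its methods are illustrative names for this thread, not plugin or broker APIs.

```python
class DedupQueue:
    """Toy model of the requested semantics, not real broker code."""

    def __init__(self):
        self._seen = set()   # x-deduplication-header values already published
        self._messages = []  # messages actually enqueued

    def publish(self, body, dedup_header):
        """Always succeed; a duplicate is a silent no-op, like set.add."""
        if dedup_header in self._seen:
            return True  # duplicate: report Ok, enqueue nothing
        self._seen.add(dedup_header)
        self._messages.append(body)
        return True

q = DedupQueue()
q.publish(b"crawl", "https://example.com/a")
q.publish(b"crawl", "https://example.com/a")  # duplicate, silently ignored
print(len(q._messages))  # only one message ends up in the queue
```

Under these semantics the crawler never sees PRECONDITION_FAILED; the duplicate publish is simply absorbed.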
What you are trying to achieve does not pertain to the plugin itself but to RMQ internal semantics.
What you are asking is to introduce a higher level of granularity when it comes to the broker response to a message publication attempt. Right now the broker can only confirm or deny whether a message was placed into a queue. It does not consider the possible reasons of why such an event did not happen.
From the plugin's point of view, the only thing we report to the broker is whether the message is a duplicate or not. It's the broker that decides (based on the channel parameters) what to respond to the client.
In normal scenarios, the client ignores whether a publication was successful or not. With transactions and publisher confirmation, the client explicitly asks the broker to confirm whether one or more messages were successfully published into a queue.
In other words, what you are asking for cannot be controlled through this plugin. It's the rabbitmq-server logic that requires changes. More specifically, we would first need to extend the AMQP protocol to support such semantics and then implement them in the broker.
Can we tell the broker that everything is okay, and not put the message in the queue? Or could we hack around it with TTL, publishing the message with ttl = 0?
We are not telling the broker whether things are Ok or not. We are telling it whether the message is a duplicate or not. The broker decides how to proceed.
Lying to the broker would lead it to believe that there are more messages in the queue than there actually are. This would cause several issues and lead to unstable logic. Misusing the TTL mechanism would cause similar problems.
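A small thought experiment (toy code, not broker internals) illustrates the divergence: if the plugin "lied" and the broker confirmed duplicates without enqueueing them, any per-publish bookkeeping on the broker side would drift away from the real queue depth, which is exactly the kind of inconsistency that breaks features like max-length.

```python
# Toy model: every publish is confirmed as Ok, but duplicates are dropped.
broker_believed_depth = 0  # incremented once per confirmed publish
actual_depth = 0           # messages really sitting in the queue
seen = set()

for url in ["a", "b", "a", "a", "c"]:
    broker_believed_depth += 1   # the broker is told "Ok" every time
    if url not in seen:          # ...but duplicates never reach the queue
        seen.add(url)
        actual_depth += 1

print(broker_believed_depth, actual_depth)  # 5 vs 3: the counts diverge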
A few possible examples come to mind: max-length would end up dropping messages out of queues even when they are not full.
The only solution is not to rely on message transactions and to publish the messages without requesting confirmation from the broker.
A feasible workaround for your problem could be to publish the messages without transactions and let the plugin silently discard the duplicates.
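As a rough illustration of that workaround with aio_pika (untested against a live broker; the queue name and URLs are placeholders): publish outside any transaction, with publisher confirms disabled, attaching the x-deduplication-header the plugin inspects.

```python
import asyncio

def dedup_headers(url):
    """Build the headers dict carrying the plugin's deduplication key."""
    return {"x-deduplication-header": url}

async def publish_unique(url, amqp_url="amqp://guest:guest@localhost/"):
    # Imported lazily so the pure helper above works without aio_pika installed.
    import aio_pika

    connection = await aio_pika.connect_robust(amqp_url)
    async with connection:
        # No transaction and no publisher confirms: the client does not
        # wait for the broker to acknowledge the publish, so a dropped
        # duplicate never surfaces as an error.
        channel = await connection.channel(publisher_confirms=False)
        message = aio_pika.Message(body=url.encode(),
                                   headers=dedup_headers(url))
        await channel.default_exchange.publish(message, routing_key="crawl")

# Against a live broker with the deduplication queue set up:
#   asyncio.run(publish_unique("https://example.com/page"))
```

The trade-off is the usual one with fire-and-forget publishing: the client gains set.add-like behaviour for duplicates but loses delivery guarantees for genuinely new messages.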
@noxdafox, thanks for a good explanation. We have stopped using transactions.
Np, hope it was useful.
Hi everybody.
I propose adding a working mode in which messages with a duplicated header are simply not put in the queue, while the client receives a success response.