rabbitmq / amqp091-go

An AMQP 0-9-1 Go client maintained by the RabbitMQ team. Originally by @streadway: `streadway/amqp`

deadlocks in `Channel.call(...)` #253

Open jxsl13 opened 6 months ago

jxsl13 commented 6 months ago

Describe the bug

Hi, I'm (still) developing a wrapper for this library. I'm trying to properly implement flow control and context handling, and to test as much as possible by simulating connection loss and other failure scenarios.

For one of my tests I have a RabbitMQ node that is out of memory on startup, which triggers the connection-blocked state.

This state seems to trigger deadlocks (or something along those lines) in this library, making this select statement block "forever": https://github.com/rabbitmq/amqp091-go/blob/a2fcd5b1a96e2eb90317af7d2983d04e4e49b558/channel.go#L181-L205

I have seen Channel.Close() and Channel.UnbindQueue(...) block "forever". The blocking of Channel.UnbindQueue(...) is reproduced in the test below.

Might be related to #225 (it might be possible to reproduce "turn off the internet" with toxiproxy, the tool I use for my tests).
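For reference, the connection-blocked state mentioned above is what the broker signals with connection.blocked / connection.unblocked, and it can be observed via Connection.NotifyBlocked. A minimal sketch (not from the issue; the URI and logging are assumptions):

```go
package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The broker sends connection.blocked / connection.unblocked when it hits
	// a resource alarm (e.g. memory); they are delivered on this channel.
	blocked := conn.NotifyBlocked(make(chan amqp.Blocking, 1))
	go func() {
		for b := range blocked {
			log.Printf("connection blocked=%v reason=%q", b.Active, b.Reason)
		}
	}()
}
```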

Reproduction steps

Here is a test that reproduces the problem; its log output looks like this:

```
level=info, msg=creating connection,
level=info, msg=registering flow control notification channel,
level=info, msg=creating channel,
level=info, msg=registering error notification channel,
level=info, msg=registering confirms notification channel,
level=info, msg=registering flow control notification channel,
level=info, msg=registering returned message notification channel,
level=info, msg=declaring exchange,
level=info, msg=declaring queue,
level=info, msg=binding queue,
level=info, msg=publishing message,
level=info, msg=unbinding queue,  (blocks here forever)
```
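The test code itself is not shown above; as a rough, hypothetical approximation of the sequence the log describes (exchange/queue names and the URI are made up, and this is not the author's actual test), run against a broker that is already in the blocked state:

```go
package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Error handling omitted below for brevity.
	_ = ch.ExchangeDeclare("test-exchange", "topic", true, false, false, false, nil)
	_, _ = ch.QueueDeclare("test-queue", true, false, false, false, nil)
	_ = ch.QueueBind("test-queue", "#", "test-exchange", false, nil)

	// basic.publish is asynchronous, so this returns even though the broker is blocked.
	_ = ch.PublishWithContext(context.Background(), "test-exchange", "routing.key",
		false, false, amqp.Publishing{ContentType: "text/plain", Body: []byte("hello")})

	// queue.unbind is a synchronous RPC; its unbind-ok reply never arrives while
	// the broker is blocked, so Channel.call(...) blocks here forever.
	_ = ch.QueueUnbind("test-queue", "#", "test-exchange", nil)
	log.Println("never reached while the broker stays blocked")
}
```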

Expected behavior

QueueBind worked, so I guess QueueUnbind should also work. I think this behavior can be triggered for nearly every method of Channel.

Additional context

Should not be relevant, but just in case: darwin/arm64, macOS 14.3.1

lukebakken commented 6 months ago

Thanks for the report and the reproduction steps. I can reproduce it; as you noted, it requires a blocked RabbitMQ node.

amotzte commented 5 months ago

I'm pretty sure I hit something similar on QueueBind: calling QueueBind with noWait=false gets stuck forever. Would it make sense to add a timeout for this operation?

sedhossein commented 1 month ago

The lack of a timeout is felt across all package methods. Almost all of the XxxWithContext methods seem to ignore their context; as mentioned above, they all end up in func (ch *Channel) call(...). I think it would be better either to remove the XxxWithContext methods if the context is going to be ignored, or to ship a minimal version that at least supports context timeouts and improve it later to support cancelling the in-flight operation. Right now I am hitting a lot of deadlocks in my code. Would something like the following be acceptable as a first version?

```go
func (ch *Channel) PublishWithContext(ctx context.Context, exchange, key string, mandatory, immediate bool, msg Publishing) error {
	err := make(chan error)
	go func() {
		err <- ch.Publish(exchange, key, mandatory, immediate, msg)
	}()
	select {
	case <-ctx.Done():
		return fmt.Errorf("context cancelled")
	case e := <-err:
		return e
	}
}
```
jxsl13 commented 1 month ago

Implementing correct context cancelation is not trivial. And no, it is not acceptable to introduce more problems than there currently are into a library that is used in production environments.

Also, your example is not correct: if the context is cancelled first, the goroutine blocks forever on the unbuffered error channel, so you will (potentially) be leaking goroutines upon context cancelation.
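For illustration, one common way to avoid that particular leak is to buffer the error channel so the spawned goroutine can always complete its send and exit. This is only a sketch of a caller-side wrapper (the function name publishWithTimeout is made up, and it assumes the usual context and amqp "github.com/rabbitmq/amqp091-go" imports), not part of the library, and it still does not cancel the underlying call:

```go
// Sketch: buffered channel variant of the wrapper above. The goroutine can
// always complete its send and exit, so it is not leaked once Publish returns.
// It does NOT cancel the underlying Publish; if Publish itself blocks forever
// (e.g. against a blocked broker), the goroutine still never exits.
func publishWithTimeout(ctx context.Context, ch *amqp.Channel,
	exchange, key string, mandatory, immediate bool, msg amqp.Publishing) error {

	errCh := make(chan error, 1) // buffered: the send below never blocks
	go func() {
		errCh <- ch.Publish(exchange, key, mandatory, immediate, msg)
	}()

	select {
	case <-ctx.Done():
		return ctx.Err()
	case err := <-errCh:
		return err
	}
}
```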