Currently, when producing messages, we use a single channel, eventsCh, to receive all command requests (send, flush, and close):
func (p *partitionProducer) runEventsLoop() {
    for {
        select {
        case i := <-p.eventsChan:
            switch v := i.(type) {
            case *sendRequest:
                p.internalSend(v)
            case *flushRequest:
                p.internalFlush(v)
            case *closeProducer:
                p.internalClose(v)
                return
            }
        case <-p.connectClosedCh:
            p.reconnectToBroker()
        case <-p.batchFlushTicker.C:
            if p.batchBuilder.IsMultiBatches() {
                p.internalFlushCurrentBatches()
            } else {
                p.internalFlushCurrentBatch()
            }
        }
    }
}
This looks fine under normal circumstances, but in extreme cases the different request types can affect each other. For example, for the send command we have a local maxPendingMessages parameter, which is used as the buffer size of eventsCh (default: 1000).
Now assume the number of pending messages reaches the default maxPendingMessages threshold: the channel becomes full and further sends block (this is entirely possible in practice). At that point, related flush or close requests are also blocked, because they go through the same eventsCh.
Once runEventsLoop is blocked, it causes the following series of problems:
- Send requests fail with a TimeoutError.
- Once such a blocking action is triggered, the condition persists until the Go SDK is restarted.
- receiveCommand() gets stuck and its goroutine is parked (gopark).
In this case, trying bin/pulsar-admin topics unload on the topic does not help, because it is the blocking inside the Go SDK that caused the send timeout in the first place. If the Go SDK is restarted at this point, the service recovers immediately.
So here I want to discuss whether we can split eventsCh so that each request type gets its own Go channel. This has two advantages:
The different commands discussed above can no longer block one another.
The maxPendingMessages threshold becomes more precise, because today multiple request types are sent to eventsCh, so it holds more than just send commands.
Original Issue: apache/pulsar-client-go#687