I'm (and I'm sure others are) unable to view this at all without a user account; it's prompting me to log in. What is it?
@ericmanlol Yeah, I'm afraid that's intentional. I appreciate it's not really good form to put links that aren't publicly followable in a public issue, but sadly GitHub doesn't have a great solution for sharing data that's relevant to a public issue but isn't suitable for sharing publicly.
In this instance, it's a private conversation discussing the context in which this issue was seen in the wild; that context is relevant to the maintainer, but doesn't have an impact on the technical aspects of the issue.
Context cancellation for records is inspected before a produce request is sent, or after a sent request fails. Only the current "head" record in a partition is inspected -- that is, the first record in the batch that is being written.
You can see the context inspected in maybeFailErr here: https://github.com/twmb/franz-go/blob/a5f2b710830e32fc3f90374be47ec59849807342/pkg/kgo/sink.go#L1423-L1427
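As a rough sketch of what that check does (the names and signature below are illustrative; the real function in pkg/kgo/sink.go differs in detail), it amounts to a non-blocking select on the head record's context:

```go
package sketch

import "context"

// maybeFailErr: hypothetical sketch of the head-of-batch context check.
// It returns a non-nil error only when the batch can still be failed
// safely and the record's context has been canceled or timed out.
func maybeFailErr(ctx context.Context, batchCanFail bool) error {
	if !batchCanFail {
		// If the batch can no longer be failed safely, cancellation
		// is ignored at this point.
		return nil
	}
	select {
	case <-ctx.Done():
		return ctx.Err() // context.Canceled or context.DeadlineExceeded
	default:
		return nil
	}
}
```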
You can see that maybeFailErr is checked as a request is being written (before being sent) here: https://github.com/twmb/franz-go/blob/a5f2b710830e32fc3f90374be47ec59849807342/pkg/kgo/sink.go#L1633-L1635 -- note this only applies if the batch can fail.
You can see maybeFailErr checked after a request failure here: https://github.com/twmb/franz-go/blob/a5f2b710830e32fc3f90374be47ec59849807342/pkg/kgo/sink.go#L945-L946
It is checked in one other location which isn't relevant for this issue.
The problem that is happening here is actually not in the logs in the issue report, but in logs that come a bit earlier:
2024-06-04T11:55:45.114Z DEBUG producer_client kzap/kzap.go:110 wrote Produce v7 {"broker": "1", "bytes_written": 131, "write_wait": "16.95µs", "time_to_write": "19.28µs", "err": null}
2024-06-04T11:55:45.114Z DEBUG producer_client kzap/kzap.go:110 read Produce v7 {"broker": "1", "bytes_read": 0, "read_wait": "50.59µs", "time_to_read": "3.19µs", "err": "EOF"}
2024-06-04T11:55:45.114Z DEBUG producer_client kzap/kzap.go:110 read from broker errored, killing connection {"addr": "redpanda-1.redpanda.levente.svc.cluster.local.:9092", "broker": "1", "successful_reads": 7651, "err": "EOF"}
At this point, the client has written a produce request but has NOT received a response. The client cannot tell whether the broker actually received and processed the request (and the response was lost), or whether the broker never received the request at all.
One key thing to note is that if you are producing with idempotency configured, then every record produced has a sequence number that must be one higher than the prior sequence number. The only way to reset sequence numbers is if you get a new producer ID or if you bump the epoch being used for the current producer ID.
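To make that constraint concrete, here is a toy model of per-partition idempotent sequencing (my own illustration of the semantics just described, not franz-go code):

```go
package main

import "fmt"

// partitionSeq is a toy model of idempotent-producer sequencing: the
// broker tracks the last sequence per (producerID, epoch, partition),
// and each produced record must be exactly one higher.
type partitionSeq struct {
	producerID int64
	epoch      int16
	lastSeq    int32 // -1 means nothing produced yet
}

// next returns the sequence number for the next record: exactly one
// higher than the prior sequence number.
func (p *partitionSeq) next() int32 {
	p.lastSeq++
	return p.lastSeq
}

// bumpEpoch is the only escape hatch: sequences restart at 0 under the
// new epoch, at the cost of possibly duplicating data the broker
// already committed under the old epoch.
func (p *partitionSeq) bumpEpoch() {
	p.epoch++
	p.lastSeq = -1
}

func main() {
	p := partitionSeq{producerID: 42, epoch: 0, lastSeq: -1}
	fmt.Println(p.next(), p.next(), p.next()) // 0 1 2
	p.bumpEpoch()
	fmt.Println(p.epoch, p.next()) // 1 0
}
```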
There are two scenarios:
1. The broker received and processed the request, but the response was lost. Let's say the client allowed the records to be failed (i.e., what is being requested in this issue). When the buffered records are failed, the sequence number is reset to the last known produced sequence number. The next time you produce, the client will re-use a sequence number that is actually already on the broker (because the request was processed!). The client will receive an OutOfOrderSequenceNumber error. This error is actually used to indicate that data loss occurred. There is no way for the client to know the actual sequence number it should be producing at, or whether any data was lost (tbh there could be improvements within Kafka here to actually indicate the status of things better). The only thing the client can do is bump the producer epoch and reset sequence numbers internally. Unfortunately, the producer ID and sequence numbers exist to prevent duplicates, so the very process of resetting means we have a chance of allowing duplicate data.
2. The broker did not receive the request. In this case, if we failed the records, everything would work perfectly and behave as you'd like.
Unfortunately, we can't assume the latter case, so I've implemented the pessimistic view: if a produce request has been written but no response was received, no partition in that request can have its buffered records failed.
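In pseudocode terms (my own sketch of the rule just described, not franz-go's actual code), the decision looks roughly like:

```go
package sketch

// canFailBufferedRecords sketches the pessimistic rule: a partition's
// buffered records may only be failed on context cancellation if no
// produce request for that partition is in the written-but-unanswered
// state.
func canFailBufferedRecords(idempotent, writtenWithoutResponse bool) bool {
	if !idempotent {
		// Without sequence numbers there is nothing to corrupt.
		return true
	}
	// If a request was written but no response arrived, the broker may
	// have processed it; failing records and resetting sequences could
	// later surface OutOfOrderSequenceNumber or allow duplicates.
	return !writtenWithoutResponse
}
```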
That said, before I looked into the logs more and actually understood the issue, I assumed this was due to the context being canceled before a producer ID was received, and that the producer ID request was repeatedly failing, so I also went ahead and implemented the possibility to fail due to context cancellation in one more location. I can push that.
Closing due to the above explanation.
In the below example we see a produce request fail (due to a broker restart) and repeated attempts to connect to that broker with the default RequestTimeoutOverhead of 10s:

However, the request was made with a Context passed down with a timeout of 1s:

It looks like if we're in a connection retry loop, we might be waiting for that loop to exit before processing the Context cancellation?

Full logs and example code shared out of band.
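For reference, a minimal sketch of the setup being described (broker address, topic, and value are placeholders; the RequestTimeoutOverhead option just restates the default):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	client, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),          // placeholder broker
		kgo.RequestTimeoutOverhead(10*time.Second), // the default
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Produce with a 1s context; the question in this issue is whether
	// this context is honored promptly while the client sits in a
	// connection retry loop against a restarting broker.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	res := client.ProduceSync(ctx, &kgo.Record{Topic: "test-topic", Value: []byte("v")})
	if err := res.FirstErr(); err != nil {
		fmt.Println("produce failed:", err)
	}
}
```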