twmb / franz-go

franz-go contains a feature complete, pure Go library for interacting with Kafka from 0.8.0 through 3.7+. Producing, consuming, transacting, administrating, etc.

RequestTimeoutOverhead appears to override Context cancellation #769

Closed: JFlath closed this issue 2 months ago

JFlath commented 3 months ago

In the example below, we see a produce request fail (due to a broker restart) and repeated attempts to reconnect to that broker with the default RequestTimeoutOverhead of 10s:

2024-06-04T12:55:45.348+0100    DEBUG   producer_client kzap/kzap.go:110        retry batches processed {"wanted_metadata_update": true, "triggering_metadata_update": true, "should_backoff": false}
2024-06-04T12:55:45.348+0100    INFO    producer_client kzap/kzap.go:114        metadata update triggered       {"why": "failed produce request triggered metadata update"}
2024-06-04T12:55:46.114+0100    DEBUG   producer_client kzap/kzap.go:110        opening connection to broker    {"addr": "redpanda-1.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "1"}
2024-06-04T12:55:46.115+0100    WARN    producer_client kzap/kzap.go:116        unable to open connection to broker     {"addr": "redpanda-1.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "1", "err": "dial tcp 10.XX.XX.XX:9092: connect: connection refused"}

2024-06-04T12:55:46.343+0100    DEBUG   producer_client kzap/kzap.go:110        opening connection to broker    {"addr": "redpanda-1.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "1"}
2024-06-04T12:55:56.352+0100    WARN    producer_client kzap/kzap.go:116        unable to open connection to broker     {"addr": "redpanda-1.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "1", "err": "dial tcp 10.XX.XX.XX:9092: i/o timeout"}

2024-06-04T12:55:56.352+0100    DEBUG   producer_client kzap/kzap.go:110        opening connection to broker    {"addr": "redpanda-1.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "1"}
2024-06-04T12:56:05.717+0100    DEBUG   producer_client kzap/kzap.go:110        reaped connections      {"time_since_last_reap": "20.009282863s", "reap_dur": "127.7µs", "num_reaped": 4}
2024-06-04T12:56:06.352+0100    WARN    producer_client kzap/kzap.go:116        unable to open connection to broker     {"addr": "redpanda-1.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "1", "err": "dial tcp 10.XX.XX.XX:9092: i/o timeout"}

2024-06-04T12:56:06.352+0100    DEBUG   producer_client kzap/kzap.go:110        opening connection to broker    {"addr": "redpanda-2.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "2"}
2024-06-04T12:56:06.353+0100    DEBUG   producer_client kzap/kzap.go:110        connection opened to broker     {"addr": "redpanda-2.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "2"}
2024-06-04T12:56:06.353+0100    DEBUG   producer_client kzap/kzap.go:110        connection initialized successfully     {"addr": "redpanda-2.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "2"}
2024-06-04T12:56:06.353+0100    DEBUG   producer_client kzap/kzap.go:110        wrote Metadata v7       {"broker": "2", "bytes_written": 34, "write_wait": "884.161µs", "time_to_write": "18.209µs", "err": null}
2024-06-04T12:56:06.354+0100    DEBUG   producer_client kzap/kzap.go:110        read Metadata v7        {"broker": "2", "bytes_read": 6694, "read_wait": "42.38µs", "time_to_read": "763.691µs", "err": null}

2024-06-04T12:56:06.355+0100    DEBUG   producer_client kzap/kzap.go:110        metadata refresh topic partition data changed   {"topic": "test_topic", "partition": 122, "new_leader": 2, "new_leader_epoch": 6, "old_leader": 1, "old_leader_epoch": 5}
2024-06-04T12:56:06.355+0100    DEBUG   producer_client kzap/kzap.go:110        opening connection to broker    {"addr": "redpanda-2.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "2"}
2024-06-04T12:56:06.355+0100    DEBUG   producer_client kzap/kzap.go:110        connection opened to broker     {"addr": "redpanda-2.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "2"}
2024-06-04T12:56:06.355+0100    DEBUG   producer_client kzap/kzap.go:110        connection initialized successfully     {"addr": "redpanda-2.redpanda.my_cluster.svc.cluster.local.:9092", "broker": "2"}

2024-06-04T12:56:06.355+0100    DEBUG   producer_client kzap/kzap.go:110        wrote Produce v7        {"broker": "2", "bytes_written": 131, "write_wait": "595.04µs", "time_to_write": "13.61µs", "err": null}
2024-06-04T12:56:06.357+0100    DEBUG   producer_client kzap/kzap.go:110        read Produce v7 {"broker": "2", "bytes_read": 62, "read_wait": "35.33µs", "time_to_read": "1.56906ms", "err": null}
2024-06-04T12:56:06.357+0100    DEBUG   producer_client kzap/kzap.go:110        produced        {"broker": "2", "to": "test_topic[122{1206=>1207}]"}
2024-06-04T12:56:06.357+0100    INFO    app     rb_produce_test/main.go:78      OK      {"partition": 122, "offset": 1206, "duration": "22.251886295s", "slow": true}

However, the request was made with a Context passed down with a 1s timeout:

var timeout = 1 * time.Second

ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()

record := kgo.Record{
    Value: []byte("payload"),
    Key:   []byte("kev"),
    Topic: "test_topic",
}

_, err := c.ProduceSync(ctx, &record).First()

It looks like if we're in a connection retry loop, we might be waiting for that loop to exit before processing the Context cancellation?
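
For reference, here is a minimal sketch of how the delay can be observed (assuming `c` is the already-configured *kgo.Client from the example code): timing the same ProduceSync call shows it returning long after the 1s deadline while the broker is restarting, matching the ~22s duration in the logs above.

start := time.Now()

ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

_, err := c.ProduceSync(ctx, &kgo.Record{
    Topic: "test_topic",
    Value: []byte("payload"),
}).First()

// With the broker down, this prints an elapsed time far beyond the 1s
// deadline (~22s in the run above) rather than returning shortly after
// the context expires.
fmt.Printf("err=%v elapsed=%s\n", err, time.Since(start))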

Full logs and example code shared out of band.

JFlath commented 3 months ago

https://redpandadata.slack.com/archives/C03ALBZ276U/p1718371892420319?thread_ts=1717421131.859379&cid=C03ALBZ276U

ericmanlol commented 2 months ago

https://redpandadata.slack.com/archives/C03ALBZ276U/p1718371892420319?thread_ts=1717421131.859379&cid=C03ALBZ276U

I (and I'm sure others) am unable to view this at all without a user account; it's prompting me to log in. What is it?

JFlath commented 2 months ago

@ericmanlol Yeah, I'm afraid that's intentional. I appreciate it's not really good form to put non-publicly-accessible links in a public issue, but sadly GitHub doesn't have a great solution for sharing data that's relevant to a public issue but isn't suitable for sharing publicly.

In this instance, it's a private conversation discussing the context in which this issue was seen in the wild; that context is relevant to the maintainer but doesn't affect the technical aspects of the issue.

twmb commented 2 months ago

Context cancellation for records is inspected before a produce request is sent OR after a produce request is sent. Only the current "head" record in a partition is inspected -- that is, the first record in the batch that is being written.

You can see the context inspected here in maybeFailErr: https://github.com/twmb/franz-go/blob/a5f2b710830e32fc3f90374be47ec59849807342/pkg/kgo/sink.go#L1423-L1427

You can see that maybeFailErr is checked as a request is being written (before being sent) here: https://github.com/twmb/franz-go/blob/a5f2b710830e32fc3f90374be47ec59849807342/pkg/kgo/sink.go#L1633-L1635 -- note this only applies if the batch can fail.

You can see maybeFailErr checked after request failure here: https://github.com/twmb/franz-go/blob/a5f2b710830e32fc3f90374be47ec59849807342/pkg/kgo/sink.go#L945-L946

It is checked in one other location which isn't relevant for this issue.
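
As a rough illustration (a simplification, not the actual franz-go source), the checks linked above amount to asking whether the head record's context has already been canceled at those points:

// Simplified sketch of the kind of check described above; the real
// maybeFailErr linked above has more to it (e.g. whether the batch is
// allowed to fail at all).
func headRecordCtxErr(ctx context.Context) error {
    select {
    case <-ctx.Done():
        return ctx.Err() // e.g. context.DeadlineExceeded for the 1s timeout above
    default:
        return nil
    }
}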

The problem here is actually not in the logs in the issue report, but in logs that come a bit earlier:

2024-06-04T11:55:45.114Z        DEBUG   producer_client kzap/kzap.go:110        wrote Produce v7        {"broker": "1", "bytes_written": 131, "write_wait": "16.95µs", "time_to_write": "19.28µs", "err": null}
2024-06-04T11:55:45.114Z        DEBUG   producer_client kzap/kzap.go:110        read Produce v7 {"broker": "1", "bytes_read": 0, "read_wait": "50.59µs", "time_to_read": "3.19µs", "err": "EOF"}
2024-06-04T11:55:45.114Z        DEBUG   producer_client kzap/kzap.go:110        read from broker errored, killing connection    {"addr": "redpanda-1.redpanda.levente.svc.cluster.local.:9092", "broker": "1", "successful_reads": 7651, "err": "EOF"}

At this point, the client has written a produce request but has NOT received a response. The client cannot know whether the broker actually received and processed the request (and the response was lost) or whether the broker never received the request at all.

One key thing to note is that if you are producing with idempotency configured, then every record produced has a sequence number that must be one higher than the prior sequence number. The only way to reset sequence numbers is if you get a new producer ID or if you bump the epoch being used for the current producer ID.
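
As a toy illustration of that rule (not Kafka or franz-go code, just the invariant described above):

// Toy model of the sequence invariant for idempotent producing: the
// next batch must start exactly one past the last accepted sequence for
// this producer ID and epoch on the partition. If a request carrying
// sequences 5..9 gets no response, the client cannot simply fail those
// records: if the broker did write them, reusing 5..9 for new records
// would be treated as duplicates of the old ones; if it never saw them,
// skipping ahead to 10 would be rejected as out of order.
func sequenceOK(lastAccepted, firstInBatch int32) bool {
    return firstInBatch == lastAccepted+1
}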

There are two scenarios:

- The broker received and processed the request, and only the response was lost: the records were actually written and the broker's expected sequence numbers have advanced.
- The broker never received the request: the records were not written, the broker still expects the old sequence numbers, and the records could safely be failed.

Unfortunately, we can't assume the latter case, so I've implemented the pessimistic view: a produce request that has been written but has not received a response prevents any partition in that request from having its records failed.
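
A minimal sketch of that pessimistic rule (an illustration, not the real implementation):

// While a written produce request is still awaiting a response, the
// partition's records are not failed, even if their context has already
// expired.
func canFailForContext(ctx context.Context, awaitingResponse bool) error {
    if awaitingResponse {
        // The broker may have processed the request and advanced its
        // sequence numbers; failing now could corrupt the idempotent
        // sequence accounting.
        return nil
    }
    return ctx.Err() // non-nil (e.g. DeadlineExceeded) means the records can be failed
}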


That said, before I looked into the logs more and actually came to understand the issue, I assumed this was due to the context being canceled before a producer ID was received, and that the producer ID request was repeatedly failing, so I also went ahead and implemented the possibility of failing due to context cancellation in one more location. I can push that.

twmb commented 2 months ago

Closing due to the above explanation.