Open · justas200 opened 3 months ago
@justas200 We just had the same issue. We had to update these configs at the broker, Connect worker, and connector levels, and then the issue was resolved.
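Roughly, these are the message-size limits at each level; a sketch with illustrative values (not the exact ones we used):

```properties
# Broker (server.properties): max message size the broker will accept
message.max.bytes=10485760

# Connect worker (connect-distributed.properties): applies to the producers
# the worker creates, including the one writing to the control topic
producer.max.request.size=10485760

# Connector config: per-connector producer override
# (requires connector.client.config.override.policy=All on the worker)
producer.override.max.request.size=10485760
```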
We are getting the same error and have tried setting max.request.size, buffer.size, and partition.fetch.bytes at the consumer/producer level. Is there any way this can be resolved without increasing the Kafka broker settings? Can we set a hard limit at the connector level only?
@ArkaSarkar19 I don't think so. This happens because the connector produces very large control messages to the control topic, so if the broker has a smaller limit, the produce fails.
Hello,
I've recently started using Iceberg Kafka Connect. I am sending data from Kafka to S3. The topic I am reading from retains data for 2 days and is approx. 22 GB in size; it has 10 partitions.
Here is the Kafka Connect config, with sensitive information removed:
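(Redacted; the general shape is like the sketch below, with placeholder names and values rather than my actual settings.)

```properties
# Hypothetical Iceberg sink config sketch -- all names/values are placeholders
name=iceberg-sink
connector.class=io.tabular.iceberg.connect.IcebergSinkConnector
topics=my-topic
tasks.max=10
iceberg.tables=mydb.mytable
iceberg.catalog.type=rest
iceberg.catalog.uri=http://rest-catalog:8181
iceberg.control.commit.interval-ms=300000
```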
The problem I am having: with a single table partition (approx. 500 distinct values) the connector works just fine. If I add another partition with approx. 10 distinct values, I get the error.
I have increased the following size-related configs; however, it has had no effect:
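These were worker-level client overrides along the following lines (a sketch with placeholder values, not the exact list):

```properties
# Worker-level overrides passed through to the connector's internal clients
consumer.max.partition.fetch.bytes=10485760   # max bytes fetched per partition per request
consumer.fetch.max.bytes=52428800             # max bytes per fetch request overall
producer.max.request.size=10485760            # max size of a single produce request
```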
Does anyone have any tips on what I should look at to solve this problem? The logs don't show anything beyond the error itself. Node metrics are fine: CPU and memory are below their thresholds. I'm not sure what else to look at. How come the message size appears to grow exponentially?