Open almir-magazord opened 8 months ago
Are you succeeding in committing to the table at all, or are you OOMing before that?
While looking at S3, can you get an estimate of how many files are being written at once? Adding a custom JMX metric for this is on the list, to make it easier in the future.
We have orders on the Kafka topic for every day since 2011. Since we are partitioning by day and database (customer), a lot of folders (and files) are created on S3. The OOM happens in the middle of the process... on the Kafka side, we can see the sink connector go through "Rebalancing"... "Healthy"... and a few minutes later it goes into the error state. Looking at the consumer groups, the lag metric shows that not a single message was consumed (lag = total messages).
To run another test, we changed the partitioning fields from order_datetime,database to __source_ts_ms,database. Now we have only a few partitions and no errors on the sink. It's working... but we would prefer to have this data partitioned by order_datetime.
It seems the connector is performing too many operations in memory and with small files before syncing... (at least that's what I'm thinking).
What could I try in order to deal with a larger number of partitions?
Edit: after talking with a more experienced data engineer, we concluded that I was partitioning my data too finely. I changed the partitioning from day + database to year + database, and there have been no errors since. But despite my mistake, it might still be a good idea to check why this error occurred.
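For reference, here is a minimal sketch of the two partition layouts being compared, expressed with the Iceberg Java API. The column names come from the discussion above; the schema itself (field IDs and types) is an assumption for illustration, not the actual table definition.

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class PartitionSpecComparison {
    public static void main(String[] args) {
        // Assumed schema: only the two columns used for partitioning.
        Schema schema = new Schema(
                Types.NestedField.required(1, "order_datetime", Types.TimestampType.withZone()),
                Types.NestedField.required(2, "database", Types.StringType.get()));

        // Original layout: one partition per day per customer database.
        // With orders going back to 2011, this means thousands of distinct
        // partition values, each needing its own data files.
        PartitionSpec byDay = PartitionSpec.builderFor(schema)
                .day("order_datetime")
                .identity("database")
                .build();

        // Coarser layout: one partition per year per customer database,
        // which drastically reduces the number of partitions (and writers).
        PartitionSpec byYear = PartitionSpec.builderFor(schema)
                .year("order_datetime")
                .identity("database")
                .build();

        System.out.println("day spec:  " + byDay);
        System.out.println("year spec: " + byYear);
    }
}
```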
My theory is that when you are crunching through the backlog, you are creating a lot of FileAppenders. I don't know your exact setup, but take the simplest case of a single target table and a topic with one partition and no fan-out. You would have one appender per table partition present in the messages being processed. Writers are closed every five minutes, so if the records in that five-minute window cover 50 partitions/customers, you will have 50 open writers, since the way Iceberg works you need a distinct file per partition.
Each writer buffers some amount of data and flushes periodically; there are settings to control this, including the Parquet row group size. As those buffers fill up, that's a lot of heap being used when you have many open writers.
Unfortunately, if you want to do this, you need to run Kafka Connect with more memory.
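To put very rough numbers on that reasoning, here is a back-of-the-envelope sketch. Every value in it is an assumption chosen for illustration, not a measurement from the connector or this workload.

```java
// Back-of-the-envelope model of heap held by open writers while replaying a
// backlog. Every number here is an assumption for illustration only.
public class WriterHeapEstimate {
    public static void main(String[] args) {
        // Assumed: replaying years of history can touch many distinct days in a
        // single five-minute commit window, for each customer database.
        long distinctDaysTouched = 200;
        long customerDatabases = 10;

        // One open writer per distinct (day, database) partition value seen
        // before the writers are closed at commit time.
        long openWriters = distinctDaysTouched * customerDatabases;

        // Assumed average in-memory buffer per writer (Parquet buffers rows
        // before flushing; the real size depends on the writer settings).
        long bufferBytesPerWriter = 8L * 1024 * 1024;

        double estimatedGiB = openWriters * bufferBytesPerWriter / (1024.0 * 1024 * 1024);
        System.out.printf("%d open writers -> roughly %.1f GiB of write buffers%n",
                openWriters, estimatedGiB);
        // 2,000 writers x 8 MiB is about 15.6 GiB, well past a 6 GB heap,
        // whereas a year + database spec keeps the writer count (and buffers) tiny.
    }
}
```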
This makes a lot of sense. We reduced the partition granularity and the error disappeared.
Thanks!
Hello!
We started to use this connector to sink data from Kafka to S3 in Iceberg format.
The Kafka topic has 375,522 messages, totalling 231 MiB in size.
The connector config looks as follows:
We can see the table created in AWS Glue and a lot of folders in S3 (data and metadata)... but a few minutes after the sink task starts, we get this error:
The Docker instance where the sink is running has 8 GB of RAM, with 6 GB allocated to the Java heap.
What could be causing this error? Thanks!