In my case, the storage space for Kafka Connect's /tmp directory was insufficient when writing to S3 with gzip compression (the Strimzi default tmpDirSizeLimit is only 5Mi). To check the usage of the Connect pod's /tmp storage, you can run:
```sh
df -h
```
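If Connect runs in Kubernetes, the same check can be run inside the pod; a sketch, assuming the Connect pod is named `connect-connect-0` (the pod name is a placeholder):

```sh
# Show free space on the Connect pod's /tmp volume
kubectl exec connect-connect-0 -- df -h /tmp
```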
If the /tmp volume is not large enough, you can increase its size by updating the tmpDirSizeLimit parameter to a larger value.
```yaml
# ...
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: connect
spec:
  template:
    pod:
      tmpDirSizeLimit: 1Gi
# ...
```
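To roll out the change, apply the updated resource; Strimzi then restarts the Connect pods with the larger /tmp volume (the file name is a placeholder):

```sh
kubectl apply -f kafka-connect.yaml
```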
This fixed the issue for me. Thanks @shepherd44!
Hi,
I am trying to run the Iceberg sink connector in K8s. I tried both the Nessie and Hive catalog based connectors and ran into the same `No space left on device` error after a few minutes. In the sample setup the producer creates 1000 records per second. Here is the connector configuration for the Nessie catalog:
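(The exact configuration is not reproduced here; below is a minimal illustrative sketch, assuming the connector is deployed as a Strimzi `KafkaConnector` resource using `io.tabular.iceberg.connect.IcebergSinkConnector`. The topic, table, warehouse, and Nessie URI values are placeholders.)

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: iceberg-sink-nessie
  labels:
    strimzi.io/cluster: connect  # must match the KafkaConnect cluster name
spec:
  class: io.tabular.iceberg.connect.IcebergSinkConnector
  tasksMax: 1
  config:
    topics: events                                   # placeholder topic
    iceberg.tables: db.events                        # placeholder table
    iceberg.catalog.catalog-impl: org.apache.iceberg.nessie.NessieCatalog
    iceberg.catalog.uri: http://nessie:19120/api/v1  # placeholder Nessie endpoint
    iceberg.catalog.ref: main
    iceberg.catalog.warehouse: s3a://warehouse/      # placeholder warehouse path
    iceberg.catalog.io-impl: org.apache.iceberg.aws.s3.S3FileIO
```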
The CPU and memory usage keep climbing as time progresses, and after a while the connector ends up with the stacktrace below. Not sure why this occurs. I also set the broker `log.retention.ms` to `10000`, since I found in a bunch of places that the broker could be running out of space, but that doesn't seem to be the case here. To validate, I even tried a different connector (`io.confluent.connect.s3.S3SinkConnector`); that seems to work fine, and I was able to store almost ~112M records into the S3 bucket.
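For reference, the retention change was made on the broker side; a sketch assuming a Strimzi-managed cluster whose `Kafka` resource is named `my-cluster` (the name is a placeholder):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    config:
      # Expire log segments after 10 seconds so broker disk usage
      # stays low while debugging the sink connector
      log.retention.ms: 10000
    # ... other required fields (listeners, storage, etc.) omitted
```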