lyft / flinkk8soperator

Kubernetes operator that provides control plane for managing Apache Flink applications
Apache License 2.0

Configure s3 for checkpointing #246

Open mootezbessifi opened 2 years ago

mootezbessifi commented 2 years ago

Dears

It is recommended to copy the proper S3 filesystem plugin jar (whether the Hadoop or the Presto variant) into the plugin path before starting the job manager. How can this be supported from the CR config?

My regards

liad5h commented 2 years ago

@mootezbessifi I don't know if this is still relevant, but this is what we did to store our checkpoints and savepoints in AWS S3. Under flinkConfig we set the following:
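Roughly along these lines (a sketch rather than the exact values; the bucket name and credentials below are placeholders):

flinkConfig:
  state.checkpoints.dir: s3://my-flink-bucket/checkpoints    # placeholder bucket name
  state.savepoints.dir: s3://my-flink-bucket/savepoints
  s3.access-key: <your-access-key>    # optional, see the note below
  s3.secret-key: <your-secret-key>    # optional, see the note below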

The s3.access-key and s3.secret-key entries are not required if you are running on EKS/EC2 with an IAM role that can access S3.

Create the S3 bucket.

In your Dockerfile, add the following (see https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/filesystems/plugins/ for more details):

# Copy the bundled S3 Hadoop filesystem plugin into Flink's plugin directory
RUN mkdir /opt/flink/plugins/s3-fs-hadoop/
RUN cp /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/s3-fs-hadoop/ && chown -R flink: /opt/flink/plugins/s3-fs-hadoop/
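To tie this back to the original question: the plugin placement has to happen in the image itself, and only the checkpoint/savepoint paths and credentials go through the CR. A minimal sketch of the resulting FlinkApplication resource, assuming the v1beta1 CRD from this repo (the name, image, jar, and entry class below are illustrative):

apiVersion: flink.k8s.io/v1beta1
kind: FlinkApplication
metadata:
  name: my-flink-job                        # illustrative name
spec:
  image: my-registry/my-flink-job:latest    # image built from the Dockerfile above
  flinkVersion: "1.14"                      # matches the Flink docs version linked above
  flinkConfig:
    state.checkpoints.dir: s3://my-flink-bucket/checkpoints
    state.savepoints.dir: s3://my-flink-bucket/savepoints
  jarName: my-flink-job.jar                 # illustrative
  parallelism: 2
  entryClass: com.example.MyJob             # illustrative
  # jobManagerConfig / taskManagerConfig omitted for brevity

The operator passes the flinkConfig map through to the Flink configuration of the deployed cluster, so no other CR field is needed for checkpointing.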