apache / airflow

Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
https://airflow.apache.org/
Apache License 2.0

Airflow Kafka Provider "commit_cadence" Not Working as Expected #34213

Open ahipp13 opened 1 year ago

ahipp13 commented 1 year ago

Apache Airflow version

Other Airflow 2 version (please specify below)

What happened

When running the Airflow Kafka provider operator "ConsumeFromTopicOperator", I had one of my runs fail. Since I have the "commit_cadence" option set to "end_of_operator", I expected to see duplicate records, because the offset should not have been committed when the operator failed. But the day ended, my counts were off, and when I looked in my DB I found that the messages were missed during the window when the run failed. So even though I had set "commit_cadence" to "end_of_operator", the offset was still committed when the DAG run failed.

What you think should happen instead

Based on the documentation, the offset should not be committed until the operator has completed successfully. If the DAG fails, the consumer should go back to the offset the operator started on.

How to reproduce

Run the Kafka provider operator on a topic, fail it mid DAG run, and check whether it goes back and picks up the messages it missed. The connection information I used is:

{ "bootstrap.servers": SERVERS, "group.id": GROUPID, "auto.offset.reset": "earliest", "security.protocol": "SSL", "ssl.ca.location": "CA", "ssl.certificate.location": "CERT", "ssl.key.location": "KEY", "ssl.key.password": "PW" }

Operating System

PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"

Versions of Apache Airflow Providers

apache-airflow-providers-apache-kafka==1.1.2

Deployment

Official Apache Airflow Helm Chart

Deployment details

No response

Anything else

Looking through the Confluent Kafka documentation, I suspect what is happening is that Confluent's consumers have an option "enable.auto.commit" that defaults to true and commits the offset every 5 seconds (https://docs.confluent.io/platform/current/clients/consumer.html#id1). When I turned this option off, it worked as expected and I got duplicate messages on failures.
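
To illustrate what I was expecting outside of Airflow, here is a rough confluent-kafka sketch with auto commit disabled and a single explicit commit at the end (server, group, and topic names are placeholders):

```python
from confluent_kafka import Consumer


def process(msg):
    # Placeholder for the real processing; may raise on a bad record.
    print(msg.value())


conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "my-group",                 # placeholder
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,            # default is True: offsets auto-committed every ~5s
}

consumer = Consumer(conf)
consumer.subscribe(["my_topic"])            # placeholder topic

consumed = 0
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            break                           # nothing more to read right now
        if msg.error():
            raise RuntimeError(msg.error())
        process(msg)                        # if this raises, no offset has been committed
        consumed += 1
    if consumed:
        consumer.commit(asynchronous=False)  # commit only after everything succeeded
finally:
    consumer.close()
```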

I don't really know what the expected behavior here is, but either 1) the source code should be changed to turn this option off, or 2) the documentation should specifically say that you need to set this option to false for the commit_cadence option to work.

Are you willing to submit PR?

Code of Conduct

Taragolis commented 1 year ago

I don't really know what the expected behavior here is, but either

1) the code should be changed to turn this option off in the source code

Do you know which part of apache-airflow-providers-apache-kafka should be changed?

or 2) the documentation should specifically say that you need to turn this option to false in order for the commit_cadence option to work.

Seems like you found the solution and know exactly what should be done when using Confluent Kafka. So maybe you could contribute this part to the provider documentation? It can easily be done by clicking "Suggest a change on this page" in https://airflow.apache.org/docs/apache-airflow-providers-apache-kafka/stable/operators/index.html

AmirAflak commented 1 year ago

would like to take this :)

ahipp13 commented 1 year ago

@Taragolis I do not know which part should be changed, but I feel something should. Basically, while the "enable.auto.commit" option is on, it doesn't really matter what you put in the operator's "commit_cadence" option, because it's going to commit the offset every 5 seconds by default. To me it feels like the operator needs another option to specify whether you want to auto commit and at what interval, and if you don't want to auto commit then you use the "commit_cadence" option. I would help but am too busy currently; I just wanted to alert everybody in case somebody else had the same discovery. I am sure the smart people on here can come up with a good solution :)

Taragolis commented 1 year ago

Or maybe a solution already exists and you could provide the required parameters to the Consumer through the connection? https://airflow.apache.org/docs/apache-airflow-providers-apache-kafka/stable/connections/kafka.html#configuring-the-connection

AmirAflak commented 1 year ago

Or maybe a solution already exists and you could provide the required parameters to the Consumer through the connection? https://airflow.apache.org/docs/apache-airflow-providers-apache-kafka/stable/connections/kafka.html#configuring-the-connection

@ahipp13 Agreed, in this case you have to specify "enable.auto.commit": False in the extra field of the Connection. Have a look at potential examples: https://github.com/search?q=repo%3Aapache%2Fairflow+enable.auto.commit&type=code
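
A rough sketch of such a connection defined in Python (all values are placeholders; the important part is the "enable.auto.commit": False entry in the config dict stored in extra):

```python
import json

from airflow.models.connection import Connection

# Placeholder connection; with auto commit disabled, only the operator's
# commit_cadence setting decides when offsets are committed.
conn = Connection(
    conn_id="kafka_default",
    conn_type="kafka",
    extra=json.dumps(
        {
            "bootstrap.servers": "localhost:9092",  # placeholder
            "group.id": "my-group",                 # placeholder
            "auto.offset.reset": "earliest",
            "enable.auto.commit": False,
        }
    ),
)

# The resulting URI can be exported, e.g. as AIRFLOW_CONN_KAFKA_DEFAULT.
print(conn.get_uri())
```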

AmirAflak commented 1 year ago

@Taragolis But if the user does not specify that, "enable.auto.commit" is on by default, and in that case the commit_cadence selection is redundant.
