Closed: joshzana closed this issue 1 month ago
This log message is internal and is only used to send a message to the trigger handler wrapper to close the handler if applicable.
It is designed to be filtered out and not be visible to users.
You can see here that it is caught by the wrapper, the handler is closed, and the record is not emitted.
Are you seeing it in the logs in your env?
What is your logging setup? Are you doing any customizations?
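For anyone following along, here is a minimal sketch of the pattern described above, assuming nothing about Airflow's actual implementation: a wrapper handler intercepts a sentinel record at a custom level, closes the wrapped per-trigger handler, and drops the record. The class name and level constant are illustrative only.

```python
import logging

# Illustrative sentinel level, mirroring the "level 100" record described above.
SENTINEL_LEVEL = 100


class TriggerHandlerWrapper(logging.Handler):
    """Wraps a per-trigger handler; closes it when the sentinel record arrives."""

    def __init__(self, inner: logging.Handler):
        super().__init__()
        self.inner = inner

    def emit(self, record: logging.LogRecord) -> None:
        if record.levelno == SENTINEL_LEVEL:
            # Internal "trigger end" signal: close the wrapped handler and drop
            # the record so it is never shown to users.
            self.inner.close()
            return
        self.inner.handle(record)
```

Note that in a setup like this, only handlers sitting behind the wrapper are shielded; any handler attached independently to the same logger (or a parent logger) still receives the level-100 record.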
Ah interesting! I should have mentioned that we use Sentry for error monitoring, with a slightly customized config compared to what Airflow provides out of the box. I can take a look at our customizations to see whether anything needs to be updated in light of Airflow 2.6.1, but as a general rule we get alerted on any logger.error in the scheduler, worker, or triggerer.
Just to follow up @dstandish, I looked over airflow_local_settings.py and sentry.py and I don't see any recent changes in there that would prevent the "trigger end" log from bleeding out into Sentry as an error log.
Interestingly, if I directly tail the triggerer logs from stdout, I don't see the message. It only shows up in Sentry.
Any ideas?
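One possible mitigation, assuming the record is being captured by the Sentry SDK's own logging integration (which installs its own handler) rather than by Airflow's trigger handler wrapper, is to tell Sentry to ignore that logger. `ignore_logger` is part of the Sentry SDK; the DSN below is a placeholder for an existing setup.

```python
import sentry_sdk
from sentry_sdk.integrations.logging import ignore_logger

# Logger name taken from the Sentry reports mentioned above.
ignore_logger("fivetran.provider_async.triggers.FivetranTrigger")

sentry_sdk.init(dsn="...")  # existing configuration unchanged
```

This only hides the symptom, of course; the question of whether the level-100 "trigger end" record should reach user-facing handlers at all remains.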
This issue has been automatically marked as stale because it has been open for 365 days without any activity. There have been several Airflow releases since the last activity on this issue. Kindly recheck the report against the latest Airflow version and let us know if the issue is reproducible. The issue will be closed in the next 30 days if no further activity occurs from the issue author.
This issue has been closed because it has not received response from the issue author.
Apache Airflow version
2.6.1
What happened
We just upgraded our Airflow instance from 2.5.2 to 2.6.1 and are now noticing errors in our monitoring system that just read "trigger end". We use an async operator from Fivetran, and all of the reports have `fivetran.provider_async.triggers.FivetranTrigger` as the logger name.
I tracked the log down to this line: https://github.com/apache/airflow/blob/8d78f903ac9d25870c61ffe762da12796475dcaa/airflow/jobs/triggerer_job_runner.py#L630
AFAICT, level=100 is a very high log level, even higher than CRITICAL (source).
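For reference, a quick check in a plain Python shell (not taken from the report) shows why a level-100 record sails past every standard level-based filter:

```python
import logging

print(logging.CRITICAL)            # 50, the highest standard level
print(logging.getLevelName(100))   # "Level 100" (no standard name)

# Thresholds only drop records *below* the configured level, so a record at
# level 100 passes even a handler set to CRITICAL.
logging.basicConfig(level=logging.CRITICAL)
logging.getLogger("demo").log(100, "trigger end")  # still emitted
```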
What you think should happen instead
The level should be INFO.
Or else the log should be removed completely if it was added for debugging purposes.
How to reproduce
1. Make a DAG that uses the operator from https://github.com/astronomer/airflow-provider-fivetran-async
2. Run that operator (see the sketch below)
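A minimal reproduction DAG might look like the sketch below. The import path and operator name follow the astronomer/airflow-provider-fivetran-async README and may differ between provider versions; the connection ID and connector ID are placeholders.

```python
from datetime import datetime

from airflow import DAG
# Assumed import path for the async provider; adjust to your installed version.
from fivetran_provider_async.operators import FivetranOperatorAsync

with DAG(
    dag_id="fivetran_async_repro",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    FivetranOperatorAsync(
        task_id="fivetran_sync",
        fivetran_conn_id="fivetran_default",  # placeholder connection ID
        connector_id="my_connector_id",       # placeholder connector ID
    )
```

Each invocation of this operator should produce one "trigger end" event, matching the "once per invocation" observation below.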
Operating System
Ubuntu 20.04.3 LTS (Focal Fossa)
Versions of Apache Airflow Providers
apache-airflow-providers-airbyte==3.2.0
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
Deployment
Other Docker-based deployment
Deployment details
Kubernetes on AWS
Anything else
Problem seems to occur once per invocation of the Fivetran async operator.
Are you willing to submit PR?
Code of Conduct