Closed: zhbdesign closed this issue 4 years ago.
Checking the scheduler, I see the following log:
[2020-07-15 14:30:40,779] {scheduler_job.py:958} INFO - 2 tasks up for execution:
<TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_Discuss_inc 2020-07-15 14:30:36.508244+00:00 [scheduled]>
<TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_inc 2020-07-15 14:30:36.508244+00:00 [scheduled]>
[2020-07-15 14:30:40,891] {scheduler_job.py:989} INFO - Figuring out tasks to run in Pool(name=default_pool) with 128 open slots and 2 task instances ready to be queued
[2020-07-15 14:30:40,892] {scheduler_job.py:1017} INFO - DAG user_MySql_2_ClickHouse_increment_srt_Activity has 0/31 running and queued tasks
[2020-07-15 14:30:40,892] {scheduler_job.py:1017} INFO - DAG user_MySql_2_ClickHouse_increment_srt_Activity has 1/31 running and queued tasks
[2020-07-15 14:30:40,901] {scheduler_job.py:1067} INFO - Setting the following tasks to queued state:
<TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_Discuss_inc 2020-07-15 14:30:36.508244+00:00 [scheduled]>
<TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_inc 2020-07-15 14:30:36.508244+00:00 [scheduled]>
[2020-07-15 14:30:40,925] {scheduler_job.py:1141} INFO - Setting the following 2 tasks to queued state:
<TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_Discuss_inc 2020-07-15 14:30:36.508244+00:00 [queued]>
<TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_inc 2020-07-15 14:30:36.508244+00:00 [queued]>
[2020-07-15 14:30:40,925] {scheduler_job.py:1177} INFO - Sending ('user_MySql_2_ClickHouse_increment_srt_Activity', 'MySql_2_ClickHouse_Activity_Activity_Discuss_inc', datetime.datetime(2020, 7, 15, 14, 30, 36, 508244, tzinfo=<Timezone [UTC]>), 1) to executor with priority 1 and queue default
[2020-07-15 14:30:40,926] {base_executor.py:58} INFO - Adding to queue: ['airflow', 'run', 'user_MySql_2_ClickHouse_increment_srt_Activity', 'MySql_2_ClickHouse_Activity_Activity_Discuss_inc', '2020-07-15T14:30:36.508244+00:00', '--local', '--pool', 'default_pool', '-sd', '/opt/airflow/dags/MySql_2_ClickHouse_srt_Activity.py']
[2020-07-15 14:30:40,926] {scheduler_job.py:1177} INFO - Sending ('user_MySql_2_ClickHouse_increment_srt_Activity', 'MySql_2_ClickHouse_Activity_Activity_inc', datetime.datetime(2020, 7, 15, 14, 30, 36, 508244, tzinfo=<Timezone [UTC]>), 1) to executor with priority 1 and queue default
[2020-07-15 14:30:40,927] {base_executor.py:58} INFO - Adding to queue: ['airflow', 'run', 'user_MySql_2_ClickHouse_increment_srt_Activity', 'MySql_2_ClickHouse_Activity_Activity_inc', '2020-07-15T14:30:36.508244+00:00', '--local', '--pool', 'default_pool', '-sd', '/opt/airflow/dags/MySql_2_ClickHouse_srt_Activity.py']
[2020-07-15 14:30:51,338] {scheduler_job.py:1316} INFO - Executor reports execution of user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_inc execution_date=2020-07-15 14:30:36.508244+00:00 exited with status failed for try_number 1
[2020-07-15 14:30:51,356] {scheduler_job.py:1333} ERROR - Executor reports task instance <TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_inc 2020-07-15 14:30:36.508244+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally?
[2020-07-15 14:30:51,357] {dagbag.py:396} INFO - Filling up the DagBag from /opt/airflow/dags/MySql_2_ClickHouse_srt_Activity.py
ClickHouse_url======================>>>>>>>>>>>>>>>>>>>>>>> jdbc:clickhouse://192.168.10.186:8123/ods
ClickHouse_url======================>>>>>>>>>>>>>>>>>>>>>>> jdbc:clickhouse://192.168.10.186:8123/ods
ClickHouse_url======================>>>>>>>>>>>>>>>>>>>>>>> jdbc:clickhouse://192.168.10.186:8123/ods
ClickHouse_url======================>>>>>>>>>>>>>>>>>>>>>>> jdbc:clickhouse://192.168.10.186:8123/ods
[2020-07-15 14:30:51,511] {taskinstance.py:1150} ERROR - Executor reports task instance <TaskInstance: user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_inc 2020-07-15 14:30:36.508244+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally?
NoneType: None
[2020-07-15 14:30:51,620] {taskinstance.py:1194} INFO - Marking task as FAILED. dag_id=user_MySql_2_ClickHouse_increment_srt_Activity, task_id=MySql_2_ClickHouse_Activity_Activity_inc, execution_date=20200715T143036, start_date=, end_date=20200715T143051
[2020-07-15 14:31:15,231] {scheduler_job.py:1316} INFO - Executor reports execution of user_MySql_2_ClickHouse_increment_srt_Activity.MySql_2_ClickHouse_Activity_Activity_Discuss_inc execution_date=2020-07-15 14:30:36.508244+00:00 exited with status success for try_number 1
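The "finished (failed) although the task says its queued" message only says that the executor reported a failure while the task instance was still queued; the real cause has to come from the worker side. If the airflow run subprocess got far enough to start logging, its log should sit under the path built from base_log_folder and log_filename_template in the configuration further below. A minimal sketch of that path (dag_id, task_id, and execution date are copied from the log above; the directory layout is an assumption derived from those two settings):

    # Sketch: reconstruct the expected task log path from the settings below
    # (base_log_folder = /opt/airflow/logs,
    #  log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log).
    import os

    base_log_folder = "/opt/airflow/logs"
    dag_id = "user_MySql_2_ClickHouse_increment_srt_Activity"
    task_id = "MySql_2_ClickHouse_Activity_Activity_inc"
    ts = "2020-07-15T14:30:36.508244+00:00"  # execution date from the scheduler log
    try_number = 1

    print(os.path.join(base_log_folder, dag_id, task_id, ts, "%d.log" % try_number))
    # -> /opt/airflow/logs/user_MySql_2_ClickHouse_increment_srt_Activity/MySql_2_ClickHouse_Activity_Activity_inc/2020-07-15T14:30:36.508244+00:00/1.log

If that file exists on the worker that picked up the task, it usually contains the actual error.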
As explained in the issue, please post your questions/troubleshooting requests in Slack (there is a #troubleshooting channel there).
I use a cluster of four machines. When I trigger the task, it is distributed to the workers, but each machine reports errors. The log is as follows:
[2020-07-15 11:25:46,471: ERROR/ForkPoolWorker-1] None
[2020-07-15 11:25:46,582: ERROR/ForkPoolWorker-2] Task airflow.executors.celery_executor.execute_command[c29ab0dd-7049-4aeb-9023-cda45b9d3462] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 78, in execute_command
    close_fds=True, env=env)
  File "/usr/local/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['airflow', 'run', 'user_MySql_2_ClickHouse_increment_srt_Activity', 'm2ctask_Homework_SubmitActivity_Member_inc', '2020-07-15T11:25:35.177616+00:00', '--local', '--pool', 'default_pool', '-sd', '/opt/airflow/dags/MySql_2_ClickHouse_srt_Activity.py']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 412, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 704, in __protected_call__
    return self.run(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/sentry_sdk/integrations/celery.py", line 171, in _inner
    reraise(*exc_info)
  File "/usr/local/lib/python3.6/site-packages/sentry_sdk/_compat.py", line 57, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/sentry_sdk/integrations/celery.py", line 166, in _inner
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 83, in execute_command
    raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
[2020-07-15 11:25:46,705: ERROR/ForkPoolWorker-1] Task airflow.executors.celery_executor.execute_command[efcd61c3-bae5-43a6-a2ba-ff584ee5a9e9] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 78, in execute_command
    close_fds=True, env=env)
  File "/usr/local/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['airflow', 'run', 'user_MySql_2_ClickHouse_increment_srt_Activity', 'MySql_2_ClickHouse_Activity_Category_inc', '2020-07-15T11:25:35.177616+00:00', '--local', '--pool', 'default_pool', '-sd', '/opt/airflow/dags/MySql_2_ClickHouse_srt_Activity.py']' returned non-zero exit status 1.
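The CalledProcessError hides the actual failure: per the traceback, celery_executor.execute_command just calls subprocess.check_call(..., close_fds=True, env=env), so the stderr of the failing airflow run command is not captured anywhere convenient. A rough way to surface the real error is to replay the exact same command by hand on one of the worker machines and keep its output. A sketch under that assumption (the command list is copied from the traceback; the code avoids capture_output so it runs on the Python 3.6 interpreter shown in the paths above):

    # Sketch: replay the command the Celery worker ran, but keep its output.
    import subprocess

    cmd = [
        "airflow", "run",
        "user_MySql_2_ClickHouse_increment_srt_Activity",
        "MySql_2_ClickHouse_Activity_Category_inc",
        "2020-07-15T11:25:35.177616+00:00",
        "--local", "--pool", "default_pool",
        "-sd", "/opt/airflow/dags/MySql_2_ClickHouse_srt_Activity.py",
    ]
    result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True)
    print(result.returncode)   # 1 in the failing case
    print(result.stderr)       # the part the Celery worker never showed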
The configuration file is as follows:
[core]
The folder where your airflow pipelines live, most likely a
subfolder in a code repository. This path must be absolute.
dags_folder = /opt/airflow/dags
The folder where airflow should store its log files
This path must be absolute
base_log_folder = /opt/airflow/logs
Airflow can store logs remotely in AWS S3, Google Cloud Storage or Elastic Search.
Set this to True if you want to enable remote logging.
remote_logging = False
Users must supply an Airflow connection id that provides access to the storage
location.
remote_log_conn_id =
remote_base_log_folder =
encrypt_s3_logs = False
Logging level
logging_level = INFO
Logging level for Flask-appbuilder UI
fab_logging_level = INFO
Logging class
Specify the class that will specify the logging configuration
This class has to be on the python classpath
Example: logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
logging_config_class =
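logging_config_class is left empty here. For illustration only (not part of this configuration): the dotted path is expected to point at a logging dict, and a common pattern is to copy Airflow's default one and adjust it. The sketch below assumes the default dict lives at airflow.config_templates.airflow_local_settings.DEFAULT_LOGGING_CONFIG, as in stock Airflow 1.10; the module path matches the example above.

    # my/path/default_local_settings.py -- hypothetical module referenced as
    # logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
    from copy import deepcopy

    from airflow.config_templates.airflow_local_settings import DEFAULT_LOGGING_CONFIG

    LOGGING_CONFIG = deepcopy(DEFAULT_LOGGING_CONFIG)
    # Illustrative tweak: raise the verbosity of task logs only.
    LOGGING_CONFIG["loggers"]["airflow.task"]["level"] = "DEBUG"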
Flag to enable/disable Colored logs in Console
Colour the logs when the controlling terminal is a TTY.
colored_console_log = True
Log format for when Colored logs is enabled
colored_log_format = [%%(blue)s%%(asctime)s%%(reset)s] {%%(blue)s%%(filename)s:%%(reset)s%%(lineno)d} %%(log_color)s%%(levelname)s%%(reset)s - %%(log_color)s%%(message)s%%(reset)s
colored_formatter_class = airflow.utils.log.colored_log.CustomTTYColoredFormatter
Format of Log line
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
Log filename format
log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log
log_processor_filename_template = {{ filename }}.log
dag_processor_manager_log_location = /opt/airflow/logs/dag_processor_manager/dag_processor_manager.log
Name of handler to read task instance logs.
Default to use task handler.
task_log_reader = task
Hostname by providing a path to a callable, which will resolve the hostname.
The format is "package:function".
#
For example, default value "socket:getfqdn" means that result from getfqdn() of "socket"
package will be used as hostname.
#
No argument should be required in the function specified.
If using IP address as hostname is preferred, use value
airflow.utils.net:get_host_ip_address
hostname_callable = socket:getfqdn
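hostname_callable uses the stock socket:getfqdn here. Purely as an illustration of the "package:function" format (the module and function names below are invented): any importable zero-argument callable works, for example one returning the short hostname instead of the FQDN.

    # mycompany_settings.py -- hypothetical module; would be referenced as
    # hostname_callable = mycompany_settings:short_hostname
    import socket

    def short_hostname():
        # Must take no arguments, per the comment above.
        return socket.gethostname()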
Default timezone in case supplied date times are naive
can be utc (default), system, or any IANA timezone string (e.g. Europe/Amsterdam)
default_timezone = utc
The executor class that airflow should use. Choices include
SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor, KubernetesExecutor
executor = CeleryExecutor
The SqlAlchemy connection string to the metadata database.
SqlAlchemy supports many different database engines; more information on
their website
sql_alchemy_conn = mysql://root:123456@192.168.10.179:3305/airflow
The encoding for the databases
sql_engine_encoding = utf-8
If SqlAlchemy should pool database connections.
sql_alchemy_pool_enabled = True
The SqlAlchemy pool size is the maximum number of database connections
in the pool. 0 indicates no limit.
sql_alchemy_pool_size = 5
The maximum overflow size of the pool.
When the number of checked-out connections reaches the size set in pool_size,
additional connections will be returned up to this limit.
When those additional connections are returned to the pool, they are disconnected and discarded.
It follows then that the total number of simultaneous connections the pool will allow
is pool_size + max_overflow,
and the total number of "sleeping" connections the pool will allow is pool_size.
max_overflow can be set to -1 to indicate no overflow limit;
no limit will be placed on the total number of concurrent connections. Defaults to 10.
sql_alchemy_max_overflow = 10
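With the values used in this file (sql_alchemy_pool_size = 5, sql_alchemy_max_overflow = 10), each Airflow process can therefore open at most 5 + 10 = 15 simultaneous connections to the metadata database, of which at most 5 are kept idle in the pool.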
The SqlAlchemy pool recycle is the number of seconds a connection
can be idle in the pool before it is invalidated. This config does
not apply to sqlite. If the number of DB connections is ever exceeded,
a lower config value will allow the system to recover faster.
sql_alchemy_pool_recycle = 1800
Check connection at the start of each connection pool checkout.
Typically, this is a simple statement like "SELECT 1".
More information here:
https://docs.sqlalchemy.org/en/13/core/pooling.html#disconnect-handling-pessimistic
sql_alchemy_pool_pre_ping = True
The schema to use for the metadata database.
SqlAlchemy supports databases with the concept of multiple schemas.
sql_alchemy_schema =
Import path for connect args in SqlAlchemy. Default to an empty dict.
This is useful when you want to configure db engine args that SqlAlchemy won't parse
in connection string.
See https://docs.sqlalchemy.org/en/13/core/engines.html#sqlalchemy.create_engine.params.connect_args
sql_alchemy_connect_args =
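sql_alchemy_connect_args is empty here. As an illustration of what the import path would point at (the module name and the connect_timeout key below are assumptions; the keys actually accepted depend on the MySQL driver in use):

    # airflow_sqlalchemy_args.py -- hypothetical module; would be referenced as
    # sql_alchemy_connect_args = airflow_sqlalchemy_args.connect_args
    # The dict is handed to SQLAlchemy's create_engine(connect_args=...) and from
    # there to the DB-API driver, so only driver-supported keys belong here.
    connect_args = {
        "connect_timeout": 30,
    }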
The amount of parallelism as a setting to the executor. This defines
the max number of task instances that should run simultaneously
on this airflow installation
parallelism = 31
The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 31
Are DAGs paused by default at creation
dags_are_paused_at_creation = True
The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 1
Whether to load the DAG examples that ship with Airflow. It's good to
get started, but you probably want to set this to False in a production
environment
load_examples = False
Whether to load the default connections that ship with Airflow. It's good to
get started, but you probably want to set this to False in a production
environment
load_default_connections = True
Where your Airflow plugins are stored
plugins_folder = /opt/airflow/plugins
Secret key to save connection passwords in the db
fernet_key = 3_xwBV05j2c4ivLdnSCou3XMmXtOtOnSg0LhK1J6GeQ=
Whether to disable pickling dags
donot_pickle = False
How long before timing out a python file import
dagbag_import_timeout = 30
How long before timing out a DagFileProcessor, which processes a dag file
dag_file_processor_timeout = 50
The class to use for running task instances in a subprocess
task_runner = StandardTaskRunner
If set, tasks without a run_as_user argument will be run with this user.
Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
What security module to use (for example kerberos)
security =
If set to False enables some unsecure features like Charts and Ad Hoc Queries.
In 2.0 will default to True.
secure_mode = False
Turn unit test mode on (overwrites many configuration options with test
values at runtime)
unit_test_mode = False
Whether to enable pickling for xcom (note that this is insecure and allows for
RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
When a task is killed forcefully, this is the amount of time in seconds that
it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
Whether to override params with dag_run.conf. If you pass some key-value pairs
through airflow dags backfill -c or airflow dags trigger -c,
the key-value pairs will override the existing ones in params.
dag_run_conf_overrides_params = False
Worker initialisation check to validate Metadata Database connection
worker_precheck = False
When discovering DAGs, ignore any files that don't contain the strings
DAG and airflow.
dag_discovery_safe_mode = True
The number of retries each task is going to have by default. Can be overridden at dag or task level.
default_task_retries = 0
Whether to serialise DAGs and persist them in DB.
If set to True, Webserver reads from DB instead of parsing DAG files
More details: https://airflow.apache.org/docs/stable/dag-serialization.html
store_serialized_dags = False
Updating serialized DAG can not be faster than a minimum interval to reduce database write rate.
min_serialized_dag_update_interval = 30
Whether to persist DAG files code in DB.
If set to True, Webserver reads file contents from DB instead of
trying to access files in a DAG folder. Defaults to same as the
store_serialized_dags setting.
Example: store_dag_code = False
store_dag_code =
Maximum number of Rendered Task Instance Fields (Template Fields) per task to store
in the Database.
When Dag Serialization is enabled (store_serialized_dags = True), all the
template_fields for each Task Instance are stored in the Database.
Keeping this number small may cause an error when you try to view
the Rendered tab in the TaskInstance view for older tasks.
max_num_rendered_ti_fields_per_task = 30
On each dagrun check against defined SLAs
check_slas = True
[secrets]
Full class name of secrets backend to enable (will precede env vars and metastore in search path)
Example: backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
backend =
The backend_kwargs param is loaded into a dictionary and passed to init of secrets backend class.
See documentation for the secrets backend you are using. JSON is expected.
Example for AWS Systems Manager ParameterStore:
{"connections_prefix": "/airflow/connections", "profile_name": "default"}
backend_kwargs =
[cli]
In what way should the cli access the API. The LocalClient will use the
database directly, while the json_client will use the api running on the
webserver
api_client = airflow.api.client.local_client
If you set web_server_url_prefix, do NOT forget to append it here, ex:
endpoint_url = http://localhost:8080/myroot
So api will look like:
http://localhost:8080/myroot/api/experimental/...
endpoint_url = http://localhost:8080
[debug]
Used only with DebugExecutor. If set to True DAG will fail with first
failed task. Helpful for debugging purposes.
fail_fast = False
[api]
How to authenticate users of the API. See
https://airflow.apache.org/docs/stable/security.html for possible values.
("airflow.api.auth.backend.default" allows all requests for historic reasons)
auth_backend = airflow.api.auth.backend.deny_all
[lineage]
what lineage backend to use
backend =
[atlas]
sasl_enabled = False
host =
port = 21000
username =
password =
[operators]
The default owner assigned to each new operator, unless
provided explicitly or passed via default_args
default_owner = airflow
default_cpus = 1
default_ram = 512
default_disk = 512
default_gpus = 0
[hive]
Default mapreduce queue for HiveOperator tasks
default_hive_mapred_queue =
[webserver]
The base url of your website as airflow cannot guess what domain or
cname you are using. This is used in automated emails that
airflow sends to point links to the right web server
base_url = http://localhost:8080
Default timezone to display all dates in the RBAC UI, can be UTC, system, or
any IANA timezone string (e.g. Europe/Amsterdam). If left empty the
default value of core/default_timezone will be used
Example: default_ui_timezone = America/New_York
default_ui_timezone = Asia/Shanghai
The ip specified when starting the web server
web_server_host = 0.0.0.0
The port on which to run the web server
web_server_port = 8080
Paths to the SSL certificate and key for the web server. When both are
provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
Paths to the SSL certificate and key for the web server. When both are
provided SSL will be enabled. This does not change the web server port.
web_server_ssl_key =
Number of seconds the webserver waits before killing gunicorn master that doesn't respond
web_server_master_timeout = 120
Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
Number of workers to refresh at a time. When set to 0, worker refresh is
disabled. When nonzero, airflow periodically refreshes webserver workers by
bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
If set to True, Airflow will track files in plugins_folder directory. When it detects changes,
then reload the gunicorn.
reload_on_plugin_change = False
Secret key used to run your flask app
It should be as random as possible
secret_key = temporary_key
Number of workers to run the Gunicorn web server
workers = 4
The worker class gunicorn should use. Choices include
sync (default), eventlet, gevent
worker_class = sync
Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
Log files for the gunicorn webserver. '-' means log to stderr.
error_logfile = -
Expose the configuration file in the web server
expose_config = True
Expose hostname in the web server
expose_hostname = True
Expose stacktrace in the web server
expose_stacktrace = True
Set to true to turn on authentication:
https://airflow.apache.org/security.html#web-authentication
authenticate = False
Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
Filtering mode. Choices include user (default) and ldapgroup.
Ldap group filtering requires using the ldap backend
#
Note that the ldap server needs the "memberOf" overlay to be set up
in order to use the ldapgroup mode.
owner_mode = user
Default DAG view. Valid values are:
tree, graph, duration, gantt, landing_times
dag_default_view = tree
Default DAG orientation. Valid values are:
LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
dag_orientation = LR
Puts the webserver in demonstration mode; blurs the names of Operators for
privacy.
demo_mode = False
The amount of time (in secs) webserver will wait for initial handshake
while fetching logs from other worker machine
log_fetch_timeout_sec = 5
Time interval (in secs) to wait before next log fetching.
log_fetch_delay_sec = 2
Distance away from page bottom to enable auto tailing.
log_auto_tailing_offset = 30
Animation speed for auto tailing log display.
log_animation_speed = 1000
By default, the webserver shows paused DAGs. Flip this to hide paused
DAGs by default
hide_paused_dags_by_default = False
Consistent page size across all listing views in the UI
page_size = 100
Use FAB-based webserver with RBAC feature
rbac = False
Define the color of navigation bar
navbar_color = #007A87
Default dagrun to show in UI
default_dag_run_display_number = 25
Enable werkzeug ProxyFix middleware for reverse proxy
enable_proxy_fix = False
Number of values to trust for X-Forwarded-For.
More info: https://werkzeug.palletsprojects.com/en/0.16.x/middleware/proxy_fix/
proxy_fix_x_for = 1
Number of values to trust for X-Forwarded-Proto
proxy_fix_x_proto = 1
Number of values to trust for X-Forwarded-Host
proxy_fix_x_host = 1
Number of values to trust for X-Forwarded-Port
proxy_fix_x_port = 1
Number of values to trust for X-Forwarded-Prefix
proxy_fix_x_prefix = 1
Set secure flag on session cookie
cookie_secure = False
Set samesite policy on session cookie
cookie_samesite =
Default setting for wrap toggle on DAG code and TI log views.
default_wrap = False
Allow the UI to be rendered in a frame
x_frame_enabled = True
Send anonymous user activity to your analytics tool
choose from google_analytics, segment, or metarouter
analytics_tool =
Unique ID of your account in the analytics tool
analytics_id =
Update FAB permissions and sync security manager roles
on webserver startup
update_fab_perms = True
Minutes of non-activity before logged out from UI
0 means never get forcibly logged out
force_log_out_after = 0
The UI cookie lifetime in days
session_lifetime_days = 30
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
If you want airflow to send emails on retries, failure, and you want to use
the airflow.utils.email.send_email_smtp function, you have to configure an
smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
Example: smtp_user = airflow
smtp_user =
Example: smtp_password = airflow
smtp_password =
smtp_port = 25
smtp_mail_from = airflow@example.com
[sentry]
Sentry (https://docs.sentry.io) integration
sentry_dsn =
[celery]
This section only applies if you are using the CeleryExecutor in
[core] section above
The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
The concurrency that will be used when starting workers with the
airflow celery worker command. This defines the number of task instances that
a worker will take, so size up your workers based on the resources on
your worker box and the nature of your tasks
worker_concurrency = 16
The maximum and minimum concurrency that will be used when starting workers with the
airflow celery worker command (always keep minimum processes, but grow
to maximum if necessary). Note the value should be max_concurrency,min_concurrency
Pick these numbers based on resources on worker box and the nature of the task.
If autoscale option is available, worker_concurrency will be ignored.
http://docs.celeryproject.org/en/latest/reference/celery.bin.worker.html#cmdoption-celery-worker-autoscale
Example: worker_autoscale = 16,12
worker_autoscale =
When you start an airflow worker, airflow starts a tiny web server
subprocess to serve the workers local log files to the airflow main
web server, who then builds pages and sends them to users. This defines
the port on which the logs are served. It needs to be unused, and open
visible from the main web server to connect into the workers.
worker_log_server_port = 8793
The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
a sqlalchemy database. Refer to the Celery documentation for more
information.
http://docs.celeryproject.org/en/latest/userguide/configuration.html#broker-settings
broker_url = redis://192.168.10.27:6379/0
The Celery result_backend. When a job finishes, it needs to update the
metadata of the job. Therefore it will post a message on a message bus,
or insert it into a database (depending of the backend)
This status is used by the scheduler to update the state of the task
The use of a database is highly recommended
http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-result-backend-settings
result_backend = db+mysql://root:123456@192.168.10.179:3305/airflow
Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
it: airflow flower. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
The root URL for Flower
Example: flower_url_prefix = /flower
flower_url_prefix =
This defines the port that Celery Flower runs on
flower_port = 5555
Securing Flower with Basic Authentication
Accepts user:password pairs separated by a comma
Example: flower_basic_auth = user1:password1,user2:password2
flower_basic_auth =
Default queue that tasks get assigned to and that worker listen on.
default_queue = default
How many processes CeleryExecutor uses to sync task state.
0 means to use max(1, number of cores - 1) processes.
sync_parallelism = 0
Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
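celery_config_options points at Airflow's stock DEFAULT_CELERY_CONFIG here. For illustration only, a custom module could start from that dict and override individual Celery settings; the override shown below (worker_prefetch_multiplier, a standard Celery option) is an example, not something this configuration does.

    # custom_celery_config.py -- hypothetical module; would be referenced as
    # celery_config_options = custom_celery_config.CELERY_CONFIG
    from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG

    CELERY_CONFIG = dict(DEFAULT_CELERY_CONFIG)
    CELERY_CONFIG["worker_prefetch_multiplier"] = 1  # fetch one task at a time per process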
In case of using SSL
ssl_active = False
ssl_key =
ssl_cert =
ssl_cacert =
Celery Pool implementation.
Choices include: prefork (default), eventlet, gevent or solo.
See:
https://docs.celeryproject.org/en/latest/userguide/workers.html#concurrency
https://docs.celeryproject.org/en/latest/userguide/concurrency/eventlet.html
pool = prefork
The number of seconds to wait before timing out send_task_to_executor or
fetch_celery_task_state operations.
operation_timeout = 2
[celery_broker_transport_options]
This section is for specifying options which can be passed to the
underlying celery broker transport. See:
http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-broker_transport_options
The visibility timeout defines the number of seconds to wait for the worker
to acknowledge the task before the message is redelivered to another worker.
Make sure to increase the visibility timeout to match the time of the longest
ETA you're planning to use.
visibility_timeout is only supported for Redis and SQS celery brokers.
See:
http://docs.celeryproject.org/en/master/userguide/configuration.html#std:setting-broker_transport_options
Example: visibility_timeout = 21600
visibility_timeout =
[dask]
This section only applies if you are using the DaskExecutor in
[core] section above
The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
TLS/ SSL settings to access a secured Dask scheduler.
tls_ca =
tls_cert =
tls_key =
[scheduler]
Task instances listen for external kill signal (when you clear tasks
from the CLI or the UI), this defines the frequency at which they should
listen (in seconds).
job_heartbeat_sec = 5
The scheduler constantly tries to trigger new tasks (look at the
scheduler section in the docs for more information). This defines
how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 5
After how much time should the scheduler terminate in seconds
-1 indicates to run continuously (see also num_runs)
run_duration = -1
The number of times to try to schedule each DAG file
-1 indicates unlimited number
num_runs = -1
The number of seconds to wait between consecutive DAG file processing
processor_poll_interval = 1
after how much time (seconds) new DAGs should be picked up from the filesystem
min_file_process_interval = 10
How often (in seconds) to scan the DAGs directory for new files. Default to 5 minutes.
dag_dir_list_interval = 60
How often should stats be printed to the logs. Setting to 0 will disable printing stats
print_stats_interval = 30
If the last scheduler heartbeat happened more than scheduler_health_check_threshold
ago (in seconds), scheduler is considered unhealthy.
This is used by the health check in the "/health" endpoint
scheduler_health_check_threshold = 30
child_process_log_directory = /opt/airflow/logs/scheduler
Local task jobs periodically heartbeat to the DB. If the job has
not heartbeat in this many seconds, the scheduler will mark the
associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
Turn off scheduler catchup by setting this to False.
Default behavior is unchanged and
Command Line Backfills still work, but the scheduler
will not do scheduler catchup if this is False,
however it can be set on a per DAG basis in the
DAG definition (catchup)
catchup_by_default = True
This changes the batch size of queries in the scheduling main loop.
If this is too high, SQL query performance may be impacted by one
or more of the following:
- reversion to full table scan
- complexity of query predicate
- excessive locking
Additionally, you may hit the maximum allowable query length for your db.
Set this to 0 for no limit (not advised)
max_tis_per_query = 512
Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
If you want to avoid send all the available metrics to StatsD,
you can configure an allow list of prefixes to send only the metrics that
start with the elements of the list (e.g: scheduler,executor,dagrun)
statsd_allow_list =
The scheduler can run multiple threads in parallel to schedule dags.
This defines how many threads will run.
max_threads = 31
authenticate = False
Turn off scheduler use of cron intervals by setting this to False.
DAGs submitted manually in the web UI or with trigger_dag will still run.
use_job_schedule = True
Allow externally triggered DagRuns for Execution Dates in the future
Only has effect if schedule_interval is set to None in DAG
allow_trigger_in_future = False
[ldap]
set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
This setting allows the use of LDAP servers that either return a
broken schema, or do not return a schema.
ignore_malformed_schema = False
[mesos]
Mesos master address which MesosExecutor will connect to.
master = localhost:5050
The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
Number of cpu cores required for running one task instance using
'airflow run --local -p '
command on a mesos slave
task_cpu = 1
Memory in MB required for running one task instance using
'airflow run --local -p '
command on a mesos slave
task_memory = 256
Enable framework checkpointing for mesos
See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
Failover timeout in milliseconds.
When checkpointing is enabled and this option is set, Mesos waits
until the configured timeout for
the MesosExecutor framework to re-register after a failover. Mesos
shuts down running tasks if the
MesosExecutor framework fails to re-register within this timeframe.
Example: failover_timeout = 604800
failover_timeout =
Enable framework authentication for mesos
See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
Mesos credentials, if authentication is enabled
Example: default_principal = admin
default_principal =
Example: default_secret = admin
default_secret =
Optional Docker Image to run on slave before running the command
This image should be accessible from mesos slave i.e mesos slave
should be able to pull this docker image before executing the command.
Example: docker_image_slave = puckel/docker-airflow
docker_image_slave =
[kerberos]
ccache = /tmp/airflow_krb5_ccache
gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab
[github_enterprise]
api_rev = v3
[admin]
UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True
[elasticsearch]
Elasticsearch host
host =
Format of the log_id, which is used to query for a given tasks logs
log_id_template = {dag_id}-{task_id}-{execution_date}-{try_number}
Used to mark the end of a log stream for a task
end_of_log_mark = end_of_log
Qualified URL for an elasticsearch frontend (like Kibana) with a template argument for log_id
Code will construct log_id using the log_id template from the argument above.
NOTE: The code will prefix the https:// automatically, don't include that here.
frontend =
Write the task logs to the stdout of the worker, rather than the default files
write_stdout = False
Instead of the default log formatter, write the log lines as JSON
json_format = False
Log fields to also attach to the json output, if enabled
json_fields = asctime, filename, lineno, levelname, message
[elasticsearch_configs]
use_ssl = False
verify_certs = True
[kubernetes]
The repository, tag and imagePullPolicy of the Kubernetes Image for the Worker to Run
worker_container_repository =
Path to the YAML pod file. If set, all other kubernetes-related fields are ignored.
pod_template_file =
worker_container_tag =
worker_container_image_pull_policy = IfNotPresent
If True, all worker pods will be deleted upon termination
delete_worker_pods = True
If False (and delete_worker_pods is True),
failed worker pods will not be deleted so users can investigate them.
delete_worker_pods_on_failure = False
Number of Kubernetes Worker Pod creation calls per scheduler loop
worker_pods_creation_batch_size = 1
The Kubernetes namespace where airflow workers should be created. Defaults to default
namespace = default
The name of the Kubernetes ConfigMap containing the Airflow Configuration (this file)
Example: airflow_configmap = airflow-configmap
airflow_configmap =
The name of the Kubernetes ConfigMap containing the airflow_local_settings.py file.
For example:
airflow_local_settings_configmap = "airflow-configmap"
if you have the following ConfigMap, airflow-configmap.yaml:
.. code-block:: yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-configmap
data:
airflow_local_settings.py: |
def pod_mutation_hook(pod):
...
airflow.cfg: |
...
Example: airflow_local_settings_configmap = airflow-configmap
airflow_local_settings_configmap =
For docker image already contains DAGs, this is set to True, and the worker will
search for dags in dags_folder,
otherwise use git sync or dags volume claim to mount DAGs
dags_in_image = False
For either git sync or volume mounted DAGs, the worker will look in this subpath for DAGs
dags_volume_subpath =
For either git sync or volume mounted DAGs, the worker will mount the volume in this path
dags_volume_mount_point =
For DAGs mounted via a volume claim (mutually exclusive with git-sync and host path)
dags_volume_claim =
For volume mounted logs, the worker will look in this subpath for logs
logs_volume_subpath =
A shared volume claim for the logs
logs_volume_claim =
For DAGs mounted via a hostPath volume (mutually exclusive with volume claim and git-sync)
Useful in local environment, discouraged in production
dags_volume_host =
A hostPath volume for the logs
Useful in local environment, discouraged in production
logs_volume_host =
A list of configMapsRefs to envFrom. If more than one configMap is
specified, provide a comma separated list: configmap_a,configmap_b
env_from_configmap_ref =
A list of secretRefs to envFrom. If more than one secret is
specified, provide a comma separated list: secret_a,secret_b
env_from_secret_ref =
Git credentials and repository for DAGs mounted via Git (mutually exclusive with volume claim)
git_repo =
git_branch =
Use a shallow clone with a history truncated to the specified number of commits.
0 - do not use shallow clone.
git_sync_depth = 1
git_subpath =
The specific rev or hash the git_sync init container will checkout
This becomes GIT_SYNC_REV environment variable in the git_sync init container for worker pods
git_sync_rev =
Use git_user and git_password for user authentication or git_ssh_key_secret_name
and git_ssh_key_secret_key for SSH authentication
git_user =
git_password =
git_sync_root = /git
git_sync_dest = repo
Mount point of the volume if git-sync is being used.
i.e. /opt/airflow/dags
git_dags_folder_mount_point =
To get Git-sync SSH authentication set up follow this format,
airflow-secrets.yaml:
.. code-block:: yaml
---
apiVersion: v1
kind: Secret
metadata:
name: airflow-secrets
data:
key needs to be gitSshKey
gitSshKey:
Example: git_ssh_key_secret_name = airflow-secrets
git_ssh_key_secret_name =
To get Git-sync SSH authentication set up follow this format,
airflow-configmap.yaml:
.. code-block:: yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-configmap
data:
known_hosts: |
github.com ssh-rsa <...>
airflow.cfg: |
...
Example: git_ssh_known_hosts_configmap_name = airflow-configmap
git_ssh_known_hosts_configmap_name =
To give the git_sync init container credentials via a secret, create a secret
with two fields: GIT_SYNC_USERNAME and GIT_SYNC_PASSWORD (example below) and
add git_sync_credentials_secret = <secret_name> to your airflow config under the
kubernetes section
Secret Example:
.. code-block:: yaml
---
apiVersion: v1
kind: Secret
metadata:
name: git-credentials
data:
GIT_SYNC_USERNAME:
GIT_SYNC_PASSWORD:
git_sync_credentials_secret =
For cloning DAGs from git repositories into volumes: https://github.com/kubernetes/git-sync
git_sync_container_repository = k8s.gcr.io/git-sync
git_sync_container_tag = v3.1.1
git_sync_init_container_name = git-sync-clone
git_sync_run_as_user = 65533
The name of the Kubernetes service account to be associated with airflow workers, if any.
Service accounts are required for workers that require access to secrets or cluster resources.
See the Kubernetes RBAC documentation for more:
https://kubernetes.io/docs/admin/authorization/rbac/
worker_service_account_name =
Any image pull secrets to be given to worker pods, If more than one secret is
required, provide a comma separated list: secret_a,secret_b
image_pull_secrets =
GCP Service Account Keys to be provided to tasks run on Kubernetes Executors
Should be supplied in the format: key-name-1:key-path-1,key-name-2:key-path-2
gcp_service_account_keys =
Use the service account kubernetes gives to pods to connect to kubernetes cluster.
It's intended for clients that expect to be running inside a pod running on kubernetes.
It will raise an exception if called from a process not running in a kubernetes environment.
in_cluster = True
When running with in_cluster=False change the default cluster_context or config_file
options to Kubernetes client. Leave these blank to use the default behaviour, like
kubectl has.
cluster_context =
config_file =
Affinity configuration as a single line formatted JSON object.
See the affinity model for top-level key names (e.g. nodeAffinity, etc.):
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#affinity-v1-core
affinity =
A list of toleration objects as a single line formatted JSON array
See:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#toleration-v1-core
tolerations =
Keyword parameters to pass while calling a kubernetes client core_v1_api methods
from Kubernetes Executor provided as a single line formatted JSON dictionary string.
List of supported params are similar for all core_v1_apis, hence a single config
variable for all apis.
See:
https://raw.githubusercontent.com/kubernetes-client/python/master/kubernetes/client/apis/core_v1_api.py
Note that if no _request_timeout is specified, the kubernetes client will wait indefinitely
for kubernetes api responses, which will cause the scheduler to hang.
The timeout is specified as [connect timeout, read timeout]
kube_client_request_args =
Specifies the uid to run the first process of the worker pods containers as
run_as_user = 50000
Specifies a gid to associate with all containers in the worker pods
if using a git_ssh_key_secret_name use an fs_group
that allows for the key to be read, e.g. 65533
fs_group =
[kubernetes_node_selectors]
The Key-value pairs to be given to worker pods.
The worker pods will be scheduled to the nodes of the specified key-value pairs.
Should be supplied in the format: key = value
[kubernetes_annotations]
The Key-value annotations pairs to be given to worker pods.
Should be supplied in the format: key = value
[kubernetes_environment_variables]
The scheduler sets the following environment variables into your workers. You may define as
many environment variables as needed and the kubernetes launcher will set them in the launched workers.
Environment variables in this section are defined as follows
<environment_variable_key> = <environment_variable_value>
For example if you wanted to set an environment variable with value prod and key
ENVIRONMENT you would follow the following format:
ENVIRONMENT = prod
Additionally you may override worker airflow settings with the
AIRFLOW__<SECTION>__<KEY>
formatting as supported by airflow normally.
[kubernetes_secrets]
The scheduler mounts the following secrets into your workers as they are launched by the
scheduler. You may define as many secrets as needed and the kubernetes launcher will parse the
defined secrets and mount them as secret environment variables in the launched workers.
Secrets in this section are defined as follows
<environment_variable_mount> = <kubernetes_secret_object>=<kubernetes_secret_key>
For example if you wanted to mount a kubernetes secret key named postgres_password
from the kubernetes secret object airflow-secret as the environment variable
POSTGRES_PASSWORD into your workers you would follow the following format:
POSTGRES_PASSWORD = airflow-secret=postgres_credentials
Additionally you may override worker airflow settings with the
AIRFLOW__<SECTION>__<KEY>
formatting as supported by airflow normally.
[kubernetes_labels]
The Key-value pairs to be given to worker pods.
The worker pods will be given these static labels, as well as some additional dynamic labels
to identify the task.
Should be supplied in the format:
key = value