Closed by espenthaem 3 weeks ago
I've also discovered I can force the migration job to run by disabling the Helm hooks on migrateDatabaseJob:

migrateDatabaseJob:
  enabled: true
  useHelmHooks: false
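For reference, the same override can be passed on the command line instead of a values file. This is only a sketch: the release name `airflow`, the `airflow-stable/airflow` chart reference, and the `airflow-dags` namespace (taken from the DB host in the logs below) are assumptions about this deployment.

```shell
# Hypothetical release name and chart reference; adjust to your setup.
# With useHelmHooks=false the migration Job is rendered as a regular
# resource, so Helm creates it during the upgrade instead of running
# it as a pre-install/pre-upgrade hook.
helm upgrade --install airflow airflow-stable/airflow \
  --namespace airflow-dags \
  --set migrateDatabaseJob.enabled=true \
  --set migrateDatabaseJob.useHelmHooks=false
```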
The migration job seems to complete the init and migration itself, but never shuts down:
WARNING:root:OSError while attempting to symlink the latest log directory
DB: postgresql://postgres:***@airflow-postgresql.airflow-dags:5432/postgres?sslmode=disable
/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py:47 DeprecationWarning: `db init` is deprecated. Use `db migrate` instead to migrate the db and/or airflow connections create-default-connections to create the default connections
[2024-06-07T10:38:47.561+0000] {migration.py:216} INFO - Context impl PostgresqlImpl.
[2024-06-07T10:38:47.571+0000] {migration.py:219} INFO - Will assume transactional DDL.
[2024-06-07T10:38:48.137+0000] {migration.py:216} INFO - Context impl PostgresqlImpl.
[2024-06-07T10:38:48.137+0000] {migration.py:219} INFO - Will assume transactional DDL.
[2024-06-07T10:38:48.166+0000] {db.py:1623} INFO - Creating tables
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
[2024-06-07T10:38:49.226+0000] {task_context_logger.py:63} INFO - Task context logging is enabled
[2024-06-07T10:38:49.227+0000] {executor_loader.py:115} INFO - Loaded executor: KubernetesExecutor
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py:165 FutureWarning: The config section [kubernetes] has been renamed to [kubernetes_executor]. Please update your `conf.get*` call to use the new name
[2024-06-07T10:38:49.433+0000] {scheduler_job_runner.py:808} INFO - Starting the scheduler
[2024-06-07T10:38:49.434+0000] {scheduler_job_runner.py:815} INFO - Processing each file at most -1 times
[2024-06-07T10:38:49.436+0000] {kubernetes_executor.py:318} INFO - Start Kubernetes executor
[2024-06-07T10:38:49.514+0000] {kubernetes_executor_utils.py:157} INFO - Event: and now my watch begins starting at resource_version: 0
[2024-06-07T10:38:49.520+0000] {kubernetes_executor.py:239} INFO - Found 0 queued task instances
[2024-06-07T10:38:49.535+0000] {manager.py:169} INFO - Launched DagFileProcessorManager with pid: 37
[2024-06-07T10:38:49.548+0000] {scheduler_job_runner.py:1608} INFO - Adopting or resetting orphaned tasks for active dag runs
[2024-06-07T10:38:49.586+0000] {settings.py:60} INFO - Configured default timezone UTC
[2024-06-07T10:38:49.682+0000] {settings.py:541} INFO - Loaded airflow_local_settings from /opt/airflow/config/airflow_local_settings.py .
[2024-06-07T10:38:49.713+0000] {scheduler_job_runner.py:1631} INFO - Marked 3 SchedulerJob instances as failed
Initialization done
[2024-06-07T10:39:07.533+0000] {configuration.py:2066} INFO - Creating new FAB webserver config file in: /opt/airflow/webserver_config.py
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Access Logformat:
I've just realized I'm actually not using the community edition of the Airflow Helm chart. My bad. I'll close this.
Checks
User-Community Airflow Helm Chart
Chart Version
1.13.1
Kubernetes Version
Helm Version
Description
I'm trying to deploy Airflow, but my scheduler, triggerer, and webserver pods are forever stuck in their wait-for-airflow-migrations init containers. However, a db-migrations job is never actually started.
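When pods hang like this, the init container's logs usually show what they are waiting on, and a quick Job listing confirms whether the chart's hooks ever created the migration Job. A minimal diagnostic sketch, assuming the `airflow-dags` namespace from the logs and a placeholder pod name:

```shell
# Pods stuck on the init container show up as Init:0/1 (or similar).
kubectl get pods -n airflow-dags

# Inspect the init container that polls for completed migrations;
# substitute a real scheduler pod name from the listing above.
kubectl logs -n airflow-dags <scheduler-pod-name> -c wait-for-airflow-migrations

# Check whether a db-migrations Job was ever created.
kubectl get jobs -n airflow-dags
```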
I'm using a custom Docker image to include my package requirements:
Relevant Logs
Custom Helm Values
I'm not using a --wait flag and I'm not deploying using ArgoCD. Here's my deploy statement: