Closed alexbegg closed 4 years ago
@alexbegg, I have fixed this in an upcoming release
@alexbegg, this has been fixed in 7.1.0
This is still happening in the 7.1.0 and 7.1.5 releases
Looks like you folks fixed it in 7.1.6
I get the same error when running:
kubectl exec -it --namespace tamagotchi-orchestration service/gita-temp1-web -- bash -c "airflow list_dags"
or airflow create_user
I upgraded to 7.1.6 but I get the same error
Yeah, it works when I go straight into the shell: Airflow correctly detects that the CeleryExecutor is in use. But running an exec command makes Airflow think it is using sqlite.
The workaround I used is prepending source /home/airflow/airflow_env.sh to the command so that it loads the correct env vars.
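A rough sketch of that workaround. The script path comes from the comment above; the connection string is the chart default quoted later in this thread, and the local file is only a stand-in for the in-pod env script:

```shell
# Stand-in for the pod's /home/airflow/airflow_env.sh
cat > /tmp/airflow_env.sh <<'EOF'
export AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://postgres:airflow@airflow-postgresql:5432/airflow"
EOF
# Inside the cluster the same idea looks like (namespace/service names
# are whatever your release uses):
#   kubectl exec -it --namespace <ns> service/<web-svc> -- \
#     bash -c "source /home/airflow/airflow_env.sh && airflow list_dags"
bash -c 'source /tmp/airflow_env.sh && echo "$AIRFLOW__CORE__SQL_ALCHEMY_CONN"'
```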
This is caused by kubectl exec not creating a login bash shell and thus not sourcing .bashrc. However, we can fix this by replacing our existing approach of using a templated AIRFLOW__CORE__SQL_ALCHEMY_CONN with AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD.
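A minimal local sketch of why the _CMD variant sidesteps the login-shell problem: Airflow's *_CMD config variables hold a shell command whose stdout becomes the setting's value, so no sourced profile is needed. The eval below is only an illustration of "run the command, use its output"; the connection string is the chart default quoted later in this thread:

```shell
# The _CMD variable holds a command, not the value itself.
export AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD='echo postgresql+psycopg2://postgres:airflow@airflow-postgresql:5432/airflow'
# Airflow effectively does the equivalent of:
CONN="$(eval "$AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD")"
echo "$CONN"
```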
When would this change take place in the official release?
@sungchun12
FYI, you can just use the method documented in the README.md, and run the following to get an interactive bash shell:
# use this to run commands like: `airflow create_user`
kubectl exec \
-it \
--namespace airflow \
--container airflow-scheduler \
Deployment/airflow-scheduler \
-- /bin/bash
But I will fix the environment variables at some point.
@thesuperzapper Yep, I'm able to do that as a workaround. However, it would be nice to run commands in a remote (non-interactive) shell from a simple post-helm-deployment script, rather than an interactive one, once you update the environment variables!
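For a non-interactive script, one hedged option is bash -lc, which starts a login shell so profile scripts (and the env vars they export) get sourced; whether that actually helps depends on how the image wires its profile. The login vs non-login difference is easy to see locally:

```shell
# `bash -c` runs a non-login shell: profile files are not read.
# `bash -lc` runs a login shell: profile files are read.
bash -c  'shopt -q login_shell && echo login || echo non-login'
bash -lc 'shopt -q login_shell && echo login || echo non-login'
```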
EDIT. chart version: airflow-7.8.0, app version: 1.10.12
I am having issues with the liveness probe related to this.
The variable is properly set: it prints the correct PostgreSQL endpoint when I kubectl exec -it bash into the airflow-scheduler container:
echo $AIRFLOW__CORE__SQL_ALCHEMY_CONN
But the airflow-scheduler container keeps restarting because it fails the liveness probe. kubectl describe pod <airflow-scheduler-pod> gives:
Warning Unhealthy 94s (x11 over 16m) kubelet, kube-node-0-kubelet.kube-dev.mesos Liveness probe failed: Traceback (most recent call last):
File "<string>", line 4, in <module>
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__init__.py", line 31, in <module>
from airflow.utils.log.logging_mixin import LoggingMixin
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/__init__.py", line 24, in <module>
from .decorators import apply_defaults as _apply_defaults
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/decorators.py", line 36, in <module>
from airflow import settings
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/settings.py", line 37, in <module>
from airflow.configuration import conf, AIRFLOW_HOME, WEBSERVER_CONFIG # NOQA F401
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/configuration.py", line 731, in <module>
conf.read(AIRFLOW_CONFIG)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/configuration.py", line 421, in read
self._validate()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/configuration.py", line 213, in _validate
self._validate_config_dependencies()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/configuration.py", line 247, in _validate_config_dependencies
self.get('core', 'executor')))
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the CeleryExecutor
@Arefer you are correct, I have raised an issue for this #23804
I have now fixed this issue in 7.10.0
Describe the bug: When running Airflow CLI commands such as airflow create_user, it brings up an error saying cannot use sqlite with the CeleryExecutor, even when I am using the default "postgresql: enabled: true" and default "executor: CeleryExecutor".
Version of Helm and Kubernetes: Helm: 3.2.1, Kubernetes: 1.15.7
Which chart: stable/airflow version 7.0.1
What happened: I have "AIRFLOW__WEBSERVER__RBAC: True", which requires me to create a user before I can log in. When accessing the shell of the "airflow-web" container and attempting to create a user via the airflow create_user CLI command, it brings up an error. I have "postgresql" as "enabled: true", so it should be using postgresql, not sqlite. This happens with any other CLI command, such as "airflow list_dags", as well.
After trial and error, it seems that explicitly setting AIRFLOW__CORE__SQL_ALCHEMY_CONN as part of the "airflow: config" values to the default end-result value of postgresql+psycopg2://postgres:airflow@airflow-postgresql:5432/airflow allows Airflow CLI commands such as airflow create_user to work, but this shouldn't be necessary. Airflow CLI commands should work with the default chart values.
What you expected to happen: After running an Airflow CLI command such as "airflow list_dags", it should not error, because AIRFLOW__CORE__SQL_ALCHEMY_CONN should be set as an actual environment variable of the container.
How to reproduce it (as minimally and precisely as possible):
kubectl exec -ti <airflow-web-pod-name> -c airflow-web bash
airflow list_dags
Anything else we need to know: It appears the problem is that
export AIRFLOW__CORE__SQL_ALCHEMY_CONN=
is set as part of the entrypoint "args" in templates/deployments-web.yaml. I think a solution would be to have this set as part of the "env" list, but I don't know enough about Helm charts to make the change myself. The same should go for AIRFLOW__CELERY__RESULT_BACKEND and AIRFLOW__CELERY__BROKER_URL as well.
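The reported root cause can be simulated locally: an export inside the entrypoint's shell (the "args" approach) is private to that process, while a container-level "env" entry is set by the kubelet for every process in the container, including kubectl exec sessions. This is only an illustration with two local shells, not the chart's actual templates:

```shell
# Shell 1 plays the entrypoint: it exports the variable, then exits.
bash -c 'export AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql://demo"; true'
# Shell 2 plays a `kubectl exec` session: a separate process that never
# inherited the export, so the variable is unset there.
bash -c 'echo "exec session sees: ${AIRFLOW__CORE__SQL_ALCHEMY_CONN:-unset}"'
```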