filippomc opened 11 months ago
@filippomc imo this is a feature that we should really think through. For example, what happens if this job fails: can the "main" pod then still exist and run? And how do we handle dependencies, e.g. the main pod can only run once the "task" has finished?
I would really like to see the user story here, the "why do we need this".
We have a case of data/user ingestion for a project with @alxbrd that we are planning to generalize. We have another case on another project where we need to ingest external data into the database on the first run. I definitely wouldn't introduce a dependency such as a pod not being able to start because a task is not finished (or because any other pod hasn't started); that would go against basic Kubernetes principles.
@filippomc postgresql has an option to ingest data on creation, you can use the entrypoint.sh to run missing migrations (Django has this). I would solve this at the application framework level instead of depending on a job.
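For reference, a minimal sketch of the postgres-on-creation approach (the names and seed data here are illustrative assumptions, not from any of our projects): the official postgres image runs any `*.sql` or `*.sh` files mounted under `/docker-entrypoint-initdb.d` once, on first initialization of an empty data directory, so seed data can be shipped as a ConfigMap next to the deployment.

```yaml
# Illustrative sketch: seed the database via the postgres image's init hook.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-seed              # hypothetical name
data:
  01-seed.sql: |
    CREATE TABLE IF NOT EXISTS users (id serial PRIMARY KEY, name text);
    INSERT INTO users (name) VALUES ('initial-user');
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres                   # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: { app: postgres }
  template:
    metadata:
      labels: { app: postgres }
    spec:
      containers:
        - name: postgres
          image: postgres:13
          env:
            - name: POSTGRES_PASSWORD
              value: example       # use a Secret in practice
          volumeMounts:
            # scripts here run only on first initialization
            - name: seed
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: seed
          configMap:
            name: postgres-seed
```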
Sure, but one thing does not exclude the other. Also, Django model migrations wouldn't be a use case for this.
@filippomc imo this is, at least until we have a good user story, a very, very low priority
As said, we already have a case on a project that we are aiming to generalize here
@zsinnema I also think we should have something similar to create daemonsets that listen for and handle events. I don't much like events being handled inside service pods
Jobs are useful for one-time initializations, migrations, etc.
Jobs should be defined similarly to deployments in the values.yaml
If `auto` is true, a job is created with the same context as the deployment (variables, volumes). Since a volume can be shared, it is important to add pod affinity.
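A possible shape for this, as a sketch only (the `jobs`/`auto` field names and all resource names below are assumptions, not an agreed API): the job is declared next to the deployment in values.yaml, and the chart renders a Job that reuses the deployment's env and volumes, with pod affinity so it is scheduled on the same node as the service pod it shares a volume with.

```yaml
# Hypothetical values.yaml entry:
# myservice:
#   harness:
#     jobs:
#       init-data:
#         auto: true                 # create the job together with the deployment
#         command: ["python", "ingest.py"]

# Job the chart could render from the entry above, reusing the deployment's context.
apiVersion: batch/v1
kind: Job
metadata:
  name: myservice-init-data
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      affinity:
        podAffinity:
          # co-locate with the service pod so a shared ReadWriteOnce volume works
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: myservice
              topologyKey: kubernetes.io/hostname
      containers:
        - name: init-data
          image: myregistry/myservice:latest   # same image as the deployment
          command: ["python", "ingest.py"]
          envFrom:
            - configMapRef:
                name: myservice-env            # same env as the deployment
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: myservice-data          # volume shared with the deployment
```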