argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

Support generateName for application resources #1639

Open jessesuen opened 5 years ago

jessesuen commented 5 years ago

A common request is to support generateName in resources. Although kubectl apply does not work with generateName, Argo CD could behave such that when it sees a resource with generateName instead of name, it performs a create instead.

Note that resources created in this manner would immediately cause the application to be OutOfSync, since Argo CD would consider them "extra" resources that need to be pruned. To mitigate this, the user could use this feature to prevent the extra resource from contributing to the overall OutOfSync condition of the application as a whole.
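
For illustration only, a rough sketch of what that could look like, assuming the argocd.argoproj.io/compare-options: IgnoreExtraneous annotation (which comes up later in this thread) is the kind of option meant here; this is a sketch, not confirmed behavior:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: my-job-
  annotations:
    # hypothetical combination: create via generateName, and keep the resulting
    # "extra" live object from flipping the application to OutOfSync
    argocd.argoproj.io/compare-options: IgnoreExtraneous
spec:
  template:
    spec:
      containers:
      - name: main
        image: alpine:3.18        # placeholder image
        command: ["echo", "run once per sync"]
      restartPolicy: Never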

With this feature, Argo CD could be used to trigger job runs by simply performing a sync.

Some areas of concern:

  1. How would auto-sync behave with this feature?
  2. If the resource is a Job or Workflow, the sync operation should probably not wait until those resources complete (unlike resource hooks).
  3. How would this work with hook-weights?
  4. Diffing will not work on generateName objects.

Also of note: it is already possible to have Argo CD create resources using generateName, but those resources need to use the argocd.argoproj.io/hook annotation, e.g.:

metadata:
  generateName: my-job-
  annotations:
    argocd.argoproj.io/hook: Sync

However, using resource hooks has the following limitations:

  1. Live hook objects are not considered part of the application and thus are not candidates for pruning. They will still be presented in the UI.
  2. With resource hooks, Job/Workflow/Pod objects will block a Sync operation from completing until the Job/Workflow/Pod completes, so very long-lived jobs/pods/workflows would prevent a new argocd app sync from occurring. This would be undesirable for someone who just wants to kick off the job asynchronously.

kwladyka commented 5 years ago

What are the reasons kubectl apply does not work with generateName? I mean, while it is not supported natively, maybe there is a good reason for it.

My use case for it: I want to create an Argo CD Application which will have only one Job. This Job will run the GitOps pipelines for Concourse (a CI/CD tool). So after each change to the pipelines and a push to git, it will run and make sure all pipeline configurations in Concourse are updated. In this way I can achieve GitOps for pipelines.

Alternatively I can run this in Concourse itself to update Concourse pipelines ;) The boundary of where it should be done is blurry ;)

jessesuen commented 5 years ago

What are the reasons kubectl apply does not work with generateName? I mean, while it is not supported natively, maybe there is a good reason for it.

You can read the discussion here: https://github.com/kubernetes/kubernetes/issues/44501

The resolution was to document this limitation, rather than have kubectl apply handle generateName.

jessesuen commented 5 years ago

I want to create an Argo CD Application which will have only one Job. This Job will run the GitOps pipelines for Concourse (a CI/CD tool). So after each change to the pipelines and a push to git, it will run and make sure all pipeline configurations in Concourse are updated. In this way I can achieve GitOps for pipelines.

@kwladyka I think you can achieve your use case even today, by specifying a single Job with the Sync hook annotation, and no "normal" application resources.
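
For reference, a minimal sketch of such a Job; the image and command here are placeholders, not part of any real setup:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: update-pipelines-
  annotations:
    # run the Job on every sync; hook resources are created (not applied),
    # so generateName works here
    argocd.argoproj.io/hook: Sync
spec:
  template:
    spec:
      containers:
      - name: update-pipelines
        image: example.com/pipeline-updater:latest   # placeholder image
        command: ["update-pipelines"]                # placeholder command
      restartPolicy: Never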

jessesuen commented 5 years ago

Another important point for users interested in this feature is that, if you are using kustomize to manage configs, kustomize does not support generateName well. See:

https://github.com/kubernetes-sigs/kustomize/issues/586

wmedlar commented 5 years ago

kustomize does not support generateName well

I've been able to work around this behavior, at least in Kustomize v1, by patching in generateName with the patchesJson6902 field:

# kustomization.yaml
resources:
- job.yaml

patchesJson6902:
- path: patches/job-generate-name.yaml
  target:
    group: batch
    version: v1
    kind: Job
    name: foo
# job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec: ...
# patches/job-generate-name.yaml
- op: move
  from: /metadata/name
  path: /metadata/generateName

and finally the compiled manifests:

$ kustomize build
apiVersion: batch/v1
kind: Job
metadata:
  generateName: foo
spec: ...

Works like a charm so long as you don't try to modify the Job spec after the patch.

jessesuen commented 5 years ago

Great tip! I'm going to reference your workaround in the original kustomize bug I filed.

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

so0k commented 4 years ago

As far as I can tell, Argo CD supports Jobs with generateName only if they have the special hook annotation, which tells Argo CD to use kubectl create instead of kubectl apply. It also seems to be the only way to add such Jobs as part of your application.

lallinger-tech commented 4 years ago

For anybody stumbling across this and wondering which annotation you have to set, refer to this: https://argoproj.github.io/argo-cd/user-guide/resource_hooks/

klausroo commented 3 years ago

kustomize does not support generateName well

I've been able to work around this behavior, at least in Kustomize v1, by patching in generateName with the patchesJson6902 field:

This doesn't seem to work for me; I still get "resource name may not be empty".

I verified that my config is similar to yours.

huang195 commented 3 years ago

@jessesuen Are there any updates on this issue? I just tried to create a deployment using generateName, using your suggestion of adding the following annotations:

  annotations:
    argocd.argoproj.io/hook: Sync

I see Argo CD is able to correctly use kubectl create to create the deployment in the cluster, but as you said, it's not a candidate for pruning, which is problematic. When we delete this deployment resource in the repo, the expected behavior is to kubectl delete the deployment from the cluster as well. I wonder if there's a workaround for this problem?

Instead of the above annotation, I've also tried the following pair:

metadata:
  generateName: fortio-
  annotations:
    argocd.argoproj.io/sync-options: Replace=true
    argocd.argoproj.io/compare-options: IgnoreExtraneous

The deployment was created in the cluster, but Argo CD was treating these as separate entities, so the IgnoreExtraneous annotation probably didn't take effect. However, I don't fully understand what these options do. Will any combination of these annotations solve the problem?

dobesv commented 2 years ago

The workaround above works in kustomize 3.8.6 but not in the latest version 4.4.1, so I guess something changed in kustomize to break this.

queil commented 2 years ago

@dobesv Right, https://github.com/kubernetes-sigs/kustomize/issues/4224

24601 commented 2 years ago

We have a workaround that we are using with relatively good success; it works with Argo and on all versions of kustomize that support nameSuffix. Check out https://github.com/kubernetes-sigs/kustomize/issues/641#issuecomment-1316367513 for details.
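
Roughly, the idea (a sketch only; the linked comment spells out the actual mechanism) is to approximate generateName by rendering a unique nameSuffix into the kustomization at manifest-generation time, for example from a short commit SHA:

# kustomization.yaml
resources:
- job.yaml            # job.yaml keeps a fixed metadata.name, e.g. "foo"

# the suffix below is a placeholder; substitute something unique per revision,
# such as a short commit SHA, before or during manifest generation
nameSuffix: -c0ffee1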

alfsch commented 3 days ago

Any progress on this?