kubernetes / enhancements

Enhancements tracking repo for Kubernetes

ApplySet : `kubectl apply --prune` redesign and graduation strategy #3659

Open KnVerey opened 1 year ago

KnVerey commented 1 year ago

Enhancement Description

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.

/sig cli

soltysh commented 1 year ago

/assign @KnVerey
/stage alpha
/milestone v1.27
/label lead-opted-in

marosset commented 1 year ago

Hello @KnVerey 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on 18:00 PDT Thursday 9th February 2023.

This enhancement is targeting stage alpha for v1.27 (correct me if otherwise).

Here's where this enhancement currently stands:

For this enhancement, it looks like https://github.com/kubernetes/enhancements/pull/3661 will address most of these requirements. Please be sure to also:

The status of this enhancement is marked as at risk. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

KnVerey commented 1 year ago

@marosset I believe all of the requirements have been met with the merging of #3661 today!

marosset commented 1 year ago

This enhancement now meets all of the requirements to be tracked in v1.27. Thanks!

KnVerey commented 1 year ago

Docs placeholder PR: https://github.com/kubernetes/website/pull/39818

marosset commented 1 year ago

Hi @KnVerey :wave:,

Checking in as we approach 1.27 code freeze at 17:00 PDT on Tuesday 14th March 2023.

Please ensure the following items are completed:

Please let me know if there are any PRs in k/k I should be tracking for this KEP.

As always, we are here to help should questions come up. Thanks!

LukeMwila commented 1 year ago

Hi @KnVerey, I’m reaching out from the 1.27 Release Docs team. This enhancement is marked as ‘Needs Docs’ for the 1.27 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.27 branch in the k/website repo. This PR can be just a placeholder at this time, and must be created by March 16. For more information, please take a look at Documenting for a release to familiarize yourself with the documentation requirements for the release.

Please feel free to reach out with any questions. Thanks!

marosset commented 1 year ago

Unfortunately the implementation PRs associated with this enhancement have not merged by code-freeze so this enhancement is getting removed from the release.

If you would like to file an exception please see https://github.com/kubernetes/sig-release/blob/master/releases/EXCEPTIONS.md

/milestone clear
/remove-label tracked/yes
/label tracked/no

KnVerey commented 1 year ago

Hi @marosset they did make the release actually! You can see them here: https://github.com/orgs/kubernetes/projects/128/views/2. I will update the issue description. The only feature we were originally targeting as part of the first alpha that did not make it was kubectl diff support. The featureset in kubectl apply is complete as intended for this release.

marosset commented 1 year ago

/milestonve v1.27
/label tracked/yes
/remove-label tracked/no

marosset commented 1 year ago

> Hi @marosset they did make the release actually! You can see them here: https://github.com/orgs/kubernetes/projects/128/views/2. I will update the issue description. The only feature we were originally targeting as part of the first alpha that did not make it was kubectl diff support. The featureset in kubectl apply is complete as intended for this release.

@KnVerey I added this issue back into v1.27. Thanks for linking all the PRs above!

sftim commented 1 year ago

BTW, nearly all the labels we register are using subdomains of kubernetes.io. This KEP is using *.k8s.io keys.

If you want to make life easier for end users, get an exception in to change the labels before beta (ideally, before the alpha release). I know it's a bit late, but it looks like we missed that detail in earlier reviews.

See https://kubernetes.io/docs/reference/labels-annotations-taints/ for the list of registered keys that we use for labels and annotations.

KnVerey commented 1 year ago

/milestone v1.27

(there was a typo in the last attempt to apply this)

Sakalya commented 1 year ago

@KnVerey is there a way I can contribute to this ?

KnVerey commented 1 year ago

> is there a way I can contribute to this ?

Yes, we'll have plenty of work to do on this for v1.28! Some of it still needs to be defined through KEP updates before it can be started though. Please reach out in the sig-cli channel on Kubernetes Slack.

KnVerey commented 1 year ago

/assign @justinsb

uhthomas commented 1 year ago

Hi!

I'm looking to use applysets and struggling to understand how to use them at the cluster scope.

The KEP seems to suggest that --applyset=namespace/some-namespace should be possible, though I don't believe it is: the source code seems to explicitly allow only ConfigMaps, Secrets, and CRDs. See the example:

kubectl apply -n myapp --prune --applyset=namespaces/myapp -f .

My use case is that I apply a big v1/List with everything in it.

apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ConfigMap
  data: {}
- ...

I get this error:

$ /usr/local/bin/kubectl --kubeconfig= --cluster= --context= --user= apply --server-side --applyset=automata --prune -f -
error: namespace is required to use namespace-scoped ApplySet
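
A minimal workaround sketch, as I understand the KEP: when --applyset is given only a name, kubectl defaults to a namespaced Secret parent, so supplying a namespace avoids the error above (assumptions: the "automata" namespace is illustrative and everything in the list lives in that one namespace).

# Hedged sketch, not official guidance: use the default Secret parent by
# supplying a namespace for the apply.
KUBECTL_APPLYSET=true kubectl apply -n automata --server-side \
  --applyset=automata --prune -f -
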
uhthomas commented 1 year ago

So, I ended up making a custom resource specifically for the ApplySet, but actually getting it to work is tricky.

kubectl can't create the custom resource

So, unlike with ConfigMaps and Secrets, kubectl cannot create the custom resource.

error: custom resource ApplySet parents cannot be created automatically

Missing tooling annotation

The annotation applyset.kubernetes.io/tooling must be set to kubectl/v1.27.1:

error: ApplySet parent object "applysets.starjunk.net/automata" already exists and is missing required annotation "applyset.kubernetes.io/tooling"
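
A minimal sketch, assuming the parent is the illustrative cluster-scoped custom resource applysets.starjunk.net/automata: the missing annotation can be added by hand, though whichever client sets it becomes a field manager, which feeds into the conflicts described further below.

# Hedged sketch: add the tooling annotation manually (type, name, and version
# value are taken from the error messages in this comment).
kubectl annotate applysets.starjunk.net automata \
  applyset.kubernetes.io/tooling=kubectl/v1.27.1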

Missing ApplySet ID label

So, now I have to replicate this by hand?...

Sure, here's a go.dev/play.

error: ApplySet parent object "applysets.starjunk.net/automata" exists and does not have required label applyset.kubernetes.io/id
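
For reference, a shell sketch of how I understand the V1 ID format from the KEP (assumptions: the value is "applyset-" plus the unpadded URL-safe base64 of sha256 over "<name>.<namespace>.<kind>.<group>", with an empty namespace for cluster-scoped parents; requires openssl and GNU coreutils basenc):

# Hedged sketch of the ApplySet ID computation (names are illustrative).
name=automata namespace="" kind=ApplySet group=starjunk.net
hash=$(printf '%s.%s.%s.%s' "$name" "$namespace" "$kind" "$group" \
  | openssl dgst -sha256 -binary | basenc --base64url | tr -d '=')
echo "applyset-${hash}-v1"  # value for the applyset.kubernetes.io/id label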

Missing contains-group-resources annotation

The value of this annotation will be tedious to replicate by hand. Fortunately, it can be blank.

error: parsing ApplySet annotation on "applysets.starjunk.net/automata": kubectl requires the "applyset.kubernetes.io/contains-group-resources" annotation to be set on all ApplySet parent objects
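
Putting the pieces together, a hedged sketch of bootstrapping the parent by hand with one server-side apply (assumptions: the starjunk.net CRD already exists, its served version is v1, and ${hash} comes from the ID sketch above):

# Hedged sketch: create the parent with the metadata kubectl expects.
kubectl apply --server-side -f - <<EOF
apiVersion: starjunk.net/v1
kind: ApplySet
metadata:
  name: automata
  labels:
    applyset.kubernetes.io/id: applyset-${hash}-v1
  annotations:
    applyset.kubernetes.io/tooling: kubectl/v1.27.1
    applyset.kubernetes.io/contains-group-resources: ""
EOF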

Server-Side conflicts

It looks like, because I had to create those fields manually and did so with server-side apply, there are now conflicts which need to be resolved. The fix is to defer management of those fields to kubectl; see here.

error: Apply failed with 1 conflict: conflict with "kubectl-applyset": .metadata.annotations.applyset.kubernetes.io/tooling
statefulset.apps/vault serverside-applied
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
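
For anyone following along, the resolution that matches the warning above is to re-run the apply with --force-conflicts so the applyset tooling takes ownership of those metadata fields (a hedged sketch; the file name and parent reference are illustrative):

# Hedged sketch: take over the conflicting fields, as the warning suggests.
KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts \
  --applyset=applysets.starjunk.net/automata --prune -f list.json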

After that was all said and done, it looks like this now works as expected!

https://github.com/uhthomas/automata/actions/runs/4942497931

I really hope my feedback is helpful. Let me know if there's anything I can do to help.

uhthomas commented 1 year ago

Also, not sure if it's relevant but there are lots of warnings of throttling.

I0511 00:16:48.924009    2333 request.go:696] Waited for 1.199473039s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/apps/v1/namespaces/vault/statefulsets?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:16:59.124001    2333 request.go:696] Waited for 11.398419438s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/objectbucket.io/v1alpha1/namespaces/media/objectbucketclaims?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:09.124063    2333 request.go:696] Waited for 21.397443643s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/apps/v1/namespaces/vault-csi-provider/daemonsets?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:19.124416    2333 request.go:696] Waited for 31.397017627s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/secrets-store.csi.x-k8s.io/v1/namespaces/vault-csi-provider/secretproviderclasses?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:29.324390    2333 request.go:696] Waited for 41.596456299s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/ceph.rook.io/v1/namespaces/node-feature-discovery/cephobjectstores?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:39.524221    2333 request.go:696] Waited for 51.795742479s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/crdb.cockroachlabs.com/v1alpha1/namespaces/rook-ceph/crdbclusters?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:49.723903    2333 request.go:696] Waited for 1m1.994821913s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/api/v1/namespaces/mimir/serviceaccounts?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:59.724367    2333 request.go:696] Waited for 1m11.994827196s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/api/v1/namespaces/grafana-agent-operator/services?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:09.924321    2333 request.go:696] Waited for 1m22.194264157s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/rbac.authorization.k8s.io/v1/namespaces/snapshot-controller/roles?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:20.123847    2333 request.go:696] Waited for 1m32.393300823s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/monitoring.grafana.com/v1alpha1/namespaces/immich/logsinstances?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:30.124466    2333 request.go:696] Waited for 1m42.393384018s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/rbac.authorization.k8s.io/v1/namespaces/cert-manager/rolebindings?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:40.324001    2333 request.go:696] Waited for 1m52.592402588s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/apps/v1/namespaces/snapshot-controller/deployments?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:50.324112    2333 request.go:696] Waited for 2m2.59200303s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/ceph.rook.io/v1/namespaces/rook-ceph/cephfilesystems?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:19:00.523616    2333 request.go:696] Waited for 2m12.790972393s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/policy/v1/namespaces/vault-csi-provider/poddisruptionbudgets?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
uhthomas commented 1 year ago

This may also be worth thinking about: https://github.com/spotahome/redis-operator/issues/592. In some cases, it can lead to data loss. I'm not sure if this is any worse than the original implementation of prune, to be fair.

btrepp commented 1 year ago

I think the examples list namespaces as potential ApplySet parents, but at the moment the tooling doesn't allow that; the errors say it isn't allowed. Mainly I thought this might be a natural place for a very declarative approach, e.g. the ApplySet covers the entire namespace, and you add to the ApplySet to add more resources.

I also think that, while I completely understand and agree with 'an ApplySet should only change one namespace', in practice this makes things a bit tricky, as common tools do seem to span multiple namespaces quite often, e.g. https://github.com/cert-manager/cert-manager/issues/5471. For cert-manager I usually patch it to not affect kube-system, but it gets confusing quickly :).

So from the above, I pretty quickly hit the 'now I have to create my own CRD' point in order to get the additional-namespaces capability. I think that if namespaces were allowed to be parents (and, since they are cluster-scoped, they could span multiple namespaces, with one being the parent/managing one), that would improve the UX.

It also appears that a namespaced parent (e.g. a Secret) can't span multiple namespaces, so if you do need to change two namespaces, you need a cluster-scoped resource anyway.

Despite some understandable alpha hiccups, it's actually pretty usable, though! I'd say the best UX at the moment is to pair it heavily with Kustomize, so you can wrangle other software into working with it.
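
For what it's worth, a minimal sketch of the Kustomize pairing I mean (assumptions: a kustomization.yaml in the current directory, a single target namespace "myapp", and the default Secret parent):

# Hedged sketch: render with Kustomize, then apply and prune as one ApplySet.
kubectl kustomize . \
  | KUBECTL_APPLYSET=true kubectl apply -n myapp --server-side \
      --applyset=myapp --prune -f -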

schlichtanders commented 11 months ago

@btrepp @uhthomas I would like to transition to ApplySets, but I face the namespace problem. You seem to have created custom CRDs which can be used as an ApplySet parent. Unfortunately, I couldn't find the respective resources.

Do you or someone else know of plug and play applyset CRDs which can be used for seamless cluster-wide pruning?

~EDIT: @uhthomas, I found this commit by you which seems to suggest that you could successfully simplify the setup by using some kubectl commands. Unfortunately I couldn't find the corresponding commands. Can you help?~ Asked separately below

uhthomas commented 11 months ago

@schlichtanders I believe this comment should have everything you need? Let me know if there's more I can do to help.

https://github.com/kubernetes/enhancements/issues/3659#issuecomment-1542965531

schlichtanders commented 11 months ago

@uhthomas, I found this commit by you which seems to suggest that you were able to simplify the setup by using some kubectl commands. Unfortunately I couldn't find the corresponding commands. Can you help?

uhthomas commented 11 months ago

@schlichtanders To be clear, there are no kubectl commands which simplify this setup. You must create a CRD and custom resource as explained in my other comment. You then must follow what I've written to create the appropriate annotations and labels for the custom resource, which can be removed later as kubectl will take over. The only command which is run for all of this is KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts --applyset=applyset/automata --prune -f list.json.

schlichtanders commented 11 months ago

Thank you, Thomas, for the clarification 🙏

I have now put together my applyset.yaml as follows, with the help of your comment:

# for details on the annotations see https://kubernetes.io/docs/reference/labels-annotations-taints/
# the applyset.kubernetes.io/id depends on the parent's group/kind/name, but kubectl will complain and show you the correct id to use anyway

apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
  name: "applysets.jolin.io"
  labels:
    applyset.kubernetes.io/is-parent-type: "true"
spec:
  group: "jolin.io"
  names:
    kind: "ApplySet"
    plural: "applysets"
  scope: Cluster
  versions:
  - name: "v1"
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: "object"

---

apiVersion: "jolin.io/v1"
kind: "ApplySet"
metadata:
  name: "applyset"
  annotations:
    applyset.kubernetes.io/tooling: "kubectl/1.28"
    applyset.kubernetes.io/contains-group-resources: ""
  labels:
    applyset.kubernetes.io/id: "applyset-TFtfhJJK3oDKzE2aMUXgFU1UcLI0RI8PoIyJf5F_kuI-v1"

I need to apply the above YAML first (EDIT: I need to repeat this a couple of times, because the second document requires the CRD from the first part to be available, which takes a moment):

kubectl apply --server-side --force-conflicts -f applyset.yaml
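
A hedged sketch that avoids the manual retry, assuming the CRD name applysets.jolin.io from the YAML above: wait for the CRD to become Established between the two passes.

# First pass registers the CRD (the ApplySet object itself may fail until the
# CRD is established); wait, then apply again.
kubectl apply --server-side --force-conflicts -f applyset.yaml || true
kubectl wait --for=condition=Established crd/applysets.jolin.io --timeout=60s
kubectl apply --server-side --force-conflicts -f applyset.yaml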

After that, I can run kubectl with the ApplySet, similar to what you mentioned:

KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts --applyset=applyset.jolin.io/applyset --prune -f my-k8s-deployment.yaml

Seems to work so far :partying_face:

Note:

For more up-to-date information on all the annotations, see https://kubernetes.io/docs/reference/labels-annotations-taints/

uhthomas commented 11 months ago

Glad you were able to get it working.

I also mentioned this in my original comment, but the ID is generated here and can be generated in-browser with this program I wrote. Good to know it tells you what it should be anyway, so I guess trial and error works too.

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

armingerten commented 7 months ago

> The only feature we were originally targeting as part of the first alpha that did not make it was kubectl diff support. The featureset in kubectl apply is complete as intended for this release.

Is there a timeline yet for when ApplySets will be supported by kubectl diff? There also seems to be another stale issue about this: https://github.com/kubernetes/kubectl/issues/1435

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 5 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/enhancements/issues/3659#issuecomment-2066604718):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

btrepp commented 5 months ago

Is this really not planned now? That's kind of disappointing; it was a really good feature and I was looking forward to it.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/enhancements/issues/3659#issuecomment-2208837556):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

btrepp commented 2 months ago

Could the bot be disabled? This is probably a long-term issue, and the bot closing it is just making noise in the issue.

If ApplySets aren't going forward, that's okay and the issue should be closed, but the bot is adding no value on this thread.

sftim commented 2 months ago

/reopen

/lifecycle frozen

My opinion: when we have in-tree code (alpha or later), we should freeze the KEP issue until that in-tree code is either stable or removed.

k8s-ci-robot commented 2 months ago

@sftim: Reopened this issue.

In response to [this](https://github.com/kubernetes/enhancements/issues/3659#issuecomment-2210460639):

> /reopen
>
> /lifecycle frozen
> My opinion: when we have in-tree code (alpha or later), we should freeze the KEP issue until that in-tree code is either stable or removed.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

sftim commented 2 months ago

Oh, hang on. This never made it into the tree and nobody is driving it forward?

/remove-lifeycle frozen
/close not-planned

k8s-ci-robot commented 2 months ago

@sftim: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/enhancements/issues/3659#issuecomment-2210462413):

> Oh, hang on. This never made it into the tree and nobody is driving it forward.
> /remove-lifeycle frozen
> /close not-planned

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

btrepp commented 2 months ago

> Oh, hang on. This never made it into the tree and nobody is driving it forward. /remove-lifeycle frozen /close not-planned

Thanks for looking into it.

I know this command is available in the CLI in an alpha state, and it is in the Kubernetes docs:

https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#kubectl-apply-prune-1

Should it maybe be removed, or the docs updated if it isn't going to be developed?

It's a feature that solves some problems with managing what is being deployed, and was pretty usable in alpha. I know I adopted it in place of heavy operators like flux and argocd, so I imagine people will find the docs and use it, even though it is alpha.

uhthomas commented 2 months ago

Please don't remove applysets 😭

schlichtanders commented 2 months ago

I am also happily using ApplySets. The ApplySet alpha version has been super stable so far and works way better than the previous --prune support. I am also using it as a really great lightweight alternative to flux/argocd.

sftim commented 2 months ago

Right, it is in tree. If we can't maintain it we'll have to find a way to make things work though; unmaintained code is a risk all in itself.

/lifecycle frozen
/reopen

k8s-ci-robot commented 2 months ago

@sftim: Reopened this issue.

In response to [this](https://github.com/kubernetes/enhancements/issues/3659#issuecomment-2210940393):

> Right, it _is_ in tree. If we can't maintain it we'll have to find a way to make things work though; unmaintained code is a risk all in itself.
>
> /lifecycle frozen
> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

Barsonax commented 2 months ago

Would love to see ApplySets getting some more love.