Closed rohitagg2020 closed 2 years ago
cc: @voor we will be trying to analyse the app change in this issue
Hello, the app has not changed. Each deploy creates many seemingly useless configmaps, resulting in very slow access to the configmap API. Are there any plans for further optimization?
Here is the diff from the second deploy:
@@ update customresourcedefinition/rabbitmqclusters.rabbitmq.com (apiextensions.k8s.io/v1) cluster @@
...
80, 80 spec:
81 - conversion:
82 - strategy: None
83, 81 group: rabbitmq.com
84, 82 names:
...
4530,4528 acceptedNames:
4531 - categories:
4532 - - all
4533 - kind: RabbitmqCluster
4534 - listKind: RabbitmqClusterList
4535 - plural: rabbitmqclusters
4536 - shortNames:
4537 - - rmq
4538 - singular: rabbitmqcluster
4539 - conditions:
4540 - - lastTransitionTime: "2022-01-05T20:46:40Z"
4541 - message: no conflicts found
4542 - reason: NoConflicts
4543 - status: "True"
4544 - type: NamesAccepted
4545 - - lastTransitionTime: "2022-01-05T20:46:40Z"
4546 - message: the initial names have been accepted
4547 - reason: InitialNamesAccepted
4548 - status: "True"
4549 - type: Established
4550 - storedVersions:
4551 - - v1beta1
4529 + kind: ""
4530 + plural: ""
4531 + conditions: []
4532 + storedVersions: []
4552,4533
This diff is due to the following config on the CRD in cluster-operator.yml:
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
kapp reads this as "set status to the following value", which would reset what the API server populated in the cluster. You can either get rid of that portion of the YAML manually or remove it via a ytt overlay (or another overlaying tool). (This YAML gets generated by tools like kubebuilder.)
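As a sketch, a ytt overlay along these lines could strip the status section from CRDs before deploying (the match criteria and file names here are assumptions, not taken from the package):

```yaml
#@ load("@ytt:overlay", "overlay")

#! Remove the generated (empty) status block from every CRD in the input,
#! so kapp does not try to reset the status the API server populated.
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition"}), expects="1+"
---
#@overlay/remove
status:
```

You would then pipe the result into kapp, e.g. `ytt -f cluster-operator.yml -f remove-crd-status.yml | kapp deploy -a rabbitmq -f- ...`.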
> Are there any plans for further optimization?
kapp deploy does have an --app-changes-max-to-keep=N flag, where N defaults to 200: the number of app-change configmap records kept as history.
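For example, to cap the history at a smaller number (the app name and manifest here are illustrative):

```shell
# Keep at most 10 app-change ConfigMap records for this app
# (default is 200); older records are pruned on deploy.
kapp deploy -a rabbitmq -f cluster-operator.yml --app-changes-max-to-keep=10
```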
We should also probably see whether we can add a default rebase rule that detects an "empty" status on CRDs, since this is a pretty common case (common because tools like kubebuilder unfortunately generate an empty status instead of omitting it).
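Until such a default exists, a package can ship its own rule via a kapp Config resource. A minimal sketch, assuming the intent is simply to keep the cluster's existing CRD status (matcher scoped to CRDs only; adjust to taste):

```yaml
apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
  # Copy status from the existing (in-cluster) resource instead of
  # resetting it to the empty status block in the manifest.
  - path: [status]
    type: copy
    sources: [existing]
    resourceMatchers:
      - apiVersionKindMatcher:
          apiVersion: apiextensions.k8s.io/v1
          kind: CustomResourceDefinition
```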
This issue is being marked as stale due to a long period of inactivity and will be closed in 5 days if there is no response.
After looking at the various packages, we have figured out that one rebase rule can be added to the default set of rebase rules. (PR: https://github.com/vmware-tanzu/carvel-kapp/pull/411). The rest of the packages have to define their own rebase rules as they need custom-tailored rebase rules.
To reproduce the issue, run kapp deploy twice with the same manifest.
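For example (the app name is arbitrary; cluster-operator.yml is the manifest discussed above):

```shell
# First deploy creates the resources; the second should be a no-op,
# but instead shows the CRD status being reset to the empty block.
kapp deploy -a rabbitmq -f cluster-operator.yml -y
kapp deploy -a rabbitmq -f cluster-operator.yml --diff-changes -y
```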