Closed earthquakesan closed 1 month ago
This still appears to be an issue. I'm seeing it currently.
I tried to read around and google a bit, but haven't come up with much, and I'm not sure how to fix or debug this. Any help would be much appreciated.
In ArgoCD, the Error being displayed is:
Failed sync attempt to fa68c5c19c15882e88f303478b91b9cabbec7d39: one or more objects failed to apply, reason: CustomResourceDefinition.apiextensions.k8s.io "applicationsets.argoproj.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
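For context, the usual cause of this error is that a client-side kubectl apply (which Argo CD's normal sync path uses) stores the entire object in the kubectl.kubernetes.io/last-applied-configuration annotation, and annotation values are capped at 262144 bytes, so very large CRDs blow past the limit. A minimal sketch of how the same limit shows up and gets avoided with plain kubectl (the file name is just an illustration):

    # Client-side apply writes the whole object into an annotation and fails
    # for very large manifests:
    kubectl apply -f applicationsets-crd.yaml
    #   ... metadata.annotations: Too long: must have at most 262144 bytes

    # Either of these skips that annotation (use kubectl replace -f instead of
    # create if the object already exists; server-side apply needs a reasonably
    # recent kubectl and cluster):
    kubectl create -f applicationsets-crd.yaml
    kubectl apply --server-side -f applicationsets-crd.yaml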
I've adapted my approach by following the same pattern that argocd-autopilot takes with its bootstrap method...with my own slight modifications.
This is where the code lives: https://github.com/armenr/5thK8s/tree/main/dependencies/bootstrap
After installing and configuring argo-cd, this is the only file I kubectl apply -f in order to "bootstrap" all the other ArgoCD projects and apps:
https://github.com/armenr/5thK8s/blob/main/dependencies/bootstrap/autopilot-bootstrap.yaml
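In case it helps anyone reproduce this, that bootstrap step boils down to something like the following (the -n argocd flag is my assumption; the path is the one from the repo linked above):

    # Apply the single bootstrap manifest that creates the other
    # Argo CD projects and apps (sketch):
    kubectl apply -n argocd -f dependencies/bootstrap/autopilot-bootstrap.yaml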
I have argocd: v2.4.0+a67b97d
I don't have that ConfigMap in my cluster:
root@lnx-kub04:/tmp# kubectl -n global get cm
NAME DATA AGE
ingress-controller-leader 0 123d
ingress-nginx-global-ingressnginx-controller 2 36m
kube-resource-report-global-nginx 2 36m
kube-root-ca.crt 1 123d
metallb-global-config 1 36m
monitoring-stack-global-confluent-open-source-grafana-dashboard 1 35m
monitoring-stack-global-grafana 2 35m
monitoring-stack-global-grafana-config-dashboards 1 35m
monitoring-stack-global-grafana-test 1 35m
monitoring-stack-global-k8s-persistence-volumes 1 35m
monitoring-stack-global-ku-alertmanager-overview 1 35m
monitoring-stack-global-ku-apiserver 1 35m
monitoring-stack-global-ku-cluster-total 1 35m
monitoring-stack-global-ku-controller-manager 1 35m
monitoring-stack-global-ku-etcd 1 35m
monitoring-stack-global-ku-grafana-datasource 1 35m
monitoring-stack-global-ku-k8s-coredns 1 35m
monitoring-stack-global-ku-k8s-resources-cluster 1 35m
monitoring-stack-global-ku-k8s-resources-namespace 1 35m
monitoring-stack-global-ku-k8s-resources-node 1 35m
monitoring-stack-global-ku-k8s-resources-pod 1 35m
monitoring-stack-global-ku-k8s-resources-workload 1 35m
monitoring-stack-global-ku-k8s-resources-workloads-namespace 1 35m
monitoring-stack-global-ku-kubelet 1 35m
monitoring-stack-global-ku-namespace-by-pod 1 35m
monitoring-stack-global-ku-namespace-by-workload 1 35m
monitoring-stack-global-ku-node-cluster-rsrc-use 1 35m
monitoring-stack-global-ku-node-rsrc-use 1 35m
monitoring-stack-global-ku-nodes 1 35m
monitoring-stack-global-ku-persistentvolumesusage 1 35m
monitoring-stack-global-ku-pod-total 1 35m
monitoring-stack-global-ku-prometheus 1 35m
monitoring-stack-global-ku-proxy 1 35m
monitoring-stack-global-ku-scheduler 1 35m
monitoring-stack-global-ku-statefulset 1 35m
monitoring-stack-global-ku-workload-total 1 35m
monitoring-stack-global-node-problem-detector-custom-config 0 35m
monitoring-stack-global-op-cstor-overview 1 35m
monitoring-stack-global-op-cstor-pool 1 35m
monitoring-stack-global-op-cstor-volume 1 35m
monitoring-stack-global-op-cstor-volume-replica 1 35m
monitoring-stack-global-op-jiva-volume 1 35m
monitoring-stack-global-op-localpv-workload 1 35m
monitoring-stack-global-op-lvmlocalpv-pool 1 35m
monitoring-stack-global-op-ndm 1 35m
monitoring-stack-global-op-npd-node-volume-problem 1 35m
monitoring-stack-global-op-zfslocalpv 1 35m
prometheus-monitoring-stack-global-ku-prometheus-rulefiles-0 34 35m
root@lnx-kub04:/tmp#
I'm not able to delete the resource from the UI.
The desired manifest is too big for the UI to render.
The sync didn't work, but if I use the --replace flag it works. I'll use that as a workaround.
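For anyone else hitting this: rather than replacing everything in the app, a per-resource sync-option annotation on the oversized manifest in Git should have the same effect for just that object. A sketch (only the metadata is shown; the rest of the CRD spec is omitted):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: applicationsets.argoproj.io
      annotations:
        # Tell Argo CD to use replace/create instead of apply for this object
        argocd.argoproj.io/sync-options: Replace=true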
Hi @crenshaw-dev, I see that you have re-opened the issue. Any reason?
I was testing this feature for one of the users and everything is working as expected.
This is the dummy configmap which I used for testing. https://github.com/iam-veeramalla/argocd-example-apps/tree/master/large-cm
Steps:
Install OpenShift-GitOps operator v1.6.0
Create the Argo CD Application shown below. It deploys a ConfigMap with 215.25 KB of JSON data, which is normally rejected by kubectl apply.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dummy-large-cm
  namespace: openshift-gitops
spec:
  destination:
    namespace: openshift-gitops
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: large-cm
    repoURL: 'https://github.com/iam-veeramalla/argocd-example-apps'
  syncPolicy:
    automated: {}
    syncOptions:
      - Replace=true
The one that's doing the magic is:

    syncOptions:
      - Replace=true
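If you'd rather not bake Replace=true into the Application spec, the same thing can be done as a one-off from the CLI as far as I know (the app name is the one from the example above):

    # One-off sync using replace/create instead of apply (sketch):
    argocd app sync dummy-large-cm --replace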
Reopened because of @armenr's comment. Since things are looking okay @iam-veeramalla, I'll close again. @armenr lmk if you want to add more details.
Checklist:
argocd version
Describe the bug
Synchronization of ConfigMaps over 262144 bytes does not work when the Replace=true flag is specified. Related issues: #5704 #820
To Reproduce
Tested on minikube:
Run the following steps to reproduce:
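A minimal sketch of the setup these steps assume (standard Argo CD install on minikube; the Application pointing at the rancher-monitoring CRD manifest would come from your own repo):

    # Install Argo CD on minikube and expose the UI (sketch):
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    kubectl port-forward svc/argocd-server -n argocd 8080:443

    # Initial admin password:
    kubectl -n argocd get secret argocd-initial-admin-secret \
      -o jsonpath='{.data.password}' | base64 -d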
Open http://localhost:8080 in the browser. Log in with "admin" and the password you got earlier. Open the application; you will see that it failed to synchronize because of:
ConfigMap "rancher-monitoring-crd-manifest" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Expected behavior
Synchronization of ConfigMaps over 262144 bytes works when the Replace=true flag is specified.
Screenshots
Version
Affected versions (helm chart - argocd version):
Not affected versions (helm chart - argocd version):
The regression was introduced between the v2.0.5 and v2.1.0 releases.
Logs