Closed rayanebel closed 4 years ago
Most likely I have the same problem. I set up a new app with manual syncing, and after the first run I got the same result: it synced successfully but was still reported as OutOfSync:
Events:
But an important note: after clicking "Sync" manually, everything is now OK. So it is working for me now, but the initial behaviour is very irritating. Thanks.
Update, regarding the comment from paulcrecan: the same happened to me (but I didn't write it here). After installing Argo CD, nothing was working at all regarding syncing. I then restarted the argocd-application-controller too, and then it worked as described above. So there really seems to be an issue...
+1. I have the same problem as Ryan presented. One more thing to add: after restarting the argocd-application-controller, the applications sync successfully.
Unfortunately this fix did not solve the issue presented above. Even after upgrading to argocd:v1.4.2, the issue persists.
Hello @paulcrecan ,
Can you please describe how you reproduced the issue?
Hi @alexmt. Paul and I are working on the same team/project. We're currently running OKD v3.11 with Kubernetes v1.11. We've noticed several resources have inconsistent behaviour (mostly PVCs, Routes, and Bitnami SealedSecrets): they appear to be out of sync. In the Argo CD UI the resources look like they aren't present in the cluster at all (so not just a diffing-customization issue), but upon closer inspection both Argo CD and OpenShift recognize the resources as present/synced.
As @paulcrecan mentioned, if we restart the argocd-application-controller pod, the resources are fully in sync for a short period of time (under an hour), regardless of whether the application was just instantiated or already present in Argo. We've also increased the limits on the controller, but to no avail. Our current config for the pod is as follows:

```yaml
- argocd-application-controller
```
If any other details are necessary, please let me know!
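For anyone tuning the controller for the same symptom: a minimal sketch of raising the controller's processor counts via its command-line flags. The flag names come from the argocd-application-controller CLI; the values here are illustrative assumptions, not recommendations:

```yaml
# Sketch: argocd-application-controller container spec fragment with
# increased status/operation processors (values are illustrative).
containers:
  - name: argocd-application-controller
    command:
      - argocd-application-controller
      - --status-processors
      - "40"
      - --operation-processors
      - "20"
```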
Sorry @isabo-lola, @paulcrecan. I did not notice your comment and forgot to follow up. Are you still experiencing this issue? It might be much easier to sync up on Slack.
Should be fixed in Argo CD v1.6+.
Hi @jessesuen,
I'm using v1.7.6 and still have the same issue. Do I need to update something?
Had this issue in 1.7.10, but it turned out to be caused by a startupProbe config in the Deployment, a field that is not available until the K8s 1.17 -> 1.18 upgrade. It looks like kubectl quietly ignored it, while Argo CD noticed the difference.
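If upgrading the cluster isn't immediately possible, one workaround is to tell Argo CD to ignore the field when diffing. A minimal sketch, assuming a hypothetical Application named `my-app` (note that `jqPathExpressions` under `ignoreDifferences` requires a reasonably recent Argo CD; older versions only support `jsonPointers`):

```yaml
# Sketch: ignore startupProbe in diffs for one Application
# (Application name and target kind are placeholders).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jqPathExpressions:
        - '.spec.template.spec.containers[]?.startupProbe'
```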
Same problem here using Argo CD 1.7.6.
Any news on how to solve this problem?
Thanks
Solved.
Added the namespace I want Argo CD to manage to the secret that Argo CD uses to know which namespaces it can manage.
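For anyone hitting the same thing: a sketch of what that cluster secret can look like when scoped to specific namespaces. The secret name and the namespace list here are placeholders; the `namespaces` key is a comma-separated list, per Argo CD's declarative cluster-secret format:

```yaml
# Sketch: Argo CD cluster secret scoped to specific namespaces
# (secret name and namespace values are illustrative placeholders).
apiVersion: v1
kind: Secret
metadata:
  name: in-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: in-cluster
  server: https://kubernetes.default.svc
  namespaces: team-a,team-b
```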
I have the same problem in v1.8.7. We observed this issue after upgrading Argo CD from v1.7 to v1.8.
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
Below is the example deployment file:

```yaml
kind: "DeploymentConfig"
apiVersion: "apps.openshift.io/v1"
metadata:
  name: "frontend"
spec:
  template:
    metadata:
      labels:
        name: "frontend"
    spec:
      containers:
        - name: "helloworld"
          image: "openshift/hello-openshift"
          ports:
            - containerPort: 8080
              protocol: "TCP"
  replicas: 1
  triggers:
    - type: "ConfigChange"
  strategy:
    type: "Rolling"
  paused: false
  revisionHistoryLimit: 2
  minReadySeconds: 0
```
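One known source of perpetual OutOfSync with DeploymentConfig is fields that OpenShift mutates after apply (for example, the container image when an image trigger fires). A hedged sketch of a system-level diffing customization in `argocd-cm` for that case; the ignored path is an assumption based on the manifest shape above, not a confirmed fix for this report:

```yaml
# Sketch: argocd-cm diffing customization for DeploymentConfig
# (the ignored path is an illustrative assumption).
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.ignoreDifferences.apps.openshift.io_DeploymentConfig: |
    jqPathExpressions:
      - '.spec.template.spec.containers[]?.image'
```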
I'm having exactly this issue. I'm not quite sure whether we can use the diff customization to just ignore the startupProbe?
I have a similar issue with v2.1.6+a346cf9.
Similar issue with v2.2.1+122ecef, although sometimes I have to wait a few minutes after the (manual) sync finishes before the resources show as Synced (even if the diff shows nothing). This only happens with remote clusters.
> I have a similar issue with v2.1.6+a346cf9.

I fixed the problem. It was mostly the HPA re-ordering behaviour that has been mentioned elsewhere (found when googling): Kubernetes automatically re-orders the conditions by key. The other time it happened was a mismatch between the Docker tag and the image tag, so it was not really an Argo CD problem (in my case).
Has anyone got a fix for this?
We see our Deployments synced on Kubernetes after a tag change, but the Argo UI shows them as out of sync for a minute or so before they become Synced.
> does anyone got a fix for this? we are seeing our deployments synced on kubernetes after a tag change but argo ui is showing out of sync for a minute or so and then gets synced
I think you'd better check whether anything outside Argo CD is trying to update the live K8s object and making it unsynced. I've just experienced one more cause (in addition to the HPA ordering issue above): resourceVersion. If this is (wrongly) set in the metadata section of a Service, Kubernetes will change the value to a different one on deploy, and Argo CD will keep syncing forever (well, until it gives up).
One way to check is to view the live manifest and the desired manifest in the Argo CD UI, and the diff between them.
> I'm having exactly this issue. I'm not quite sure whether we can use the diff customization to just ignore the startupProbe?
I also encountered this problem recently, so I am posting it for your reference.
According to the Diffing Customization docs, we can add the following diffing customization to the argocd-cm ConfigMap:

```yaml
# in my case, to ignore every startupProbe in StatefulSets at system level
resource.customizations.ignoreDifferences.apps_StatefulSet: |
  jqPathExpressions:
    - '.spec.template.spec.containers[]?.startupProbe'
```
> One way to check is to view the live manifest and the desired manifest in the Argo CD UI, and the diff between them.

I think that is the real issue: there is no indication of why it isn't syncing. If there is a startup-probe issue, or a Docker container issue, or any of the other potential conflicts people have mentioned above, then Argo CD should give some indication of what it is (v2.9.3+6eba5be).
In my case I can see it is timestamps:

```yaml
creationTimestamp: '2024-01-15T15:26:40Z'
generation: 1
```

But I had to look at the Live and Desired manifests and compare them myself to notice it.
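If the diff really is only these server-populated fields, one option is to exclude them per Application. A sketch, assuming a hypothetical Application named `my-app` targeting a Deployment; whether ignoring these fields is appropriate depends on why they differ in the first place:

```yaml
# Sketch: ignore server-populated metadata fields in the diff
# (Application name and target kind are placeholders).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /metadata/creationTimestamp
        - /metadata/generation
```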
Checklist:
- `argocd version`

Hello everyone,
I deployed argocd 1.4 in our OpenShift cluster, and in this cluster I have applications deployed with Helm 3. I tried to import all of our applications into argocd. Given that argocd does not support the Helm v3 API, and after some research on my side, I changed the apiVersion in my charts to `v1`.
After that, I created my application in argocd and configured it to point to my Helm chart (which is in a git repository). My application is created and its state is now OutOfSync. So when I click Synchronize, argocd does its job and reports the synchronisation as OK, but my application is still OutOfSync/Missing. I tried from the UI to delete some resources and synchronize again, but got the same result.
Can someone help me understand what's going on? I'm really blocked and I can't use argocd.
UPDATE: When I deploy my application in the namespace where argocd is running, it works: I can see my application Healthy and Synced. But when I do the same in ANOTHER NAMESPACE, it doesn't work. It's so weird.
I'm trying to deploy in my local cluster. I've created a new cluster entry with `kubernetes.default.svc` to replace the default one, set it up with a namespaced scope, created a dedicated ServiceAccount, and set the corresponding token in the configuration. Then, since I'm on OpenShift, I gave my new ServiceAccount the `admin` role. Finally, for each namespace I created a RoleBinding which binds my `admin` role to the service account `system:serviceaccount:<NAMESPACE_OF_SA>:<DEST_NAMESPACE>`.
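For context, a sketch of the per-namespace RoleBinding described above. All names here are placeholders standing in for the poster's actual ServiceAccount and namespaces:

```yaml
# Sketch: bind the admin ClusterRole to Argo CD's ServiceAccount
# in a destination namespace (all names are illustrative placeholders).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-admin
  namespace: dest-namespace
subjects:
  - kind: ServiceAccount
    name: argocd-manager
    namespace: argocd
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```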
argocd is able to run all of its `kubectl apply` commands in this destination namespace, but the UI seems unable to fetch the resources, and I don't know why. Is it an RBAC problem or something else? What can I do to troubleshoot this? I'm stuck.

To Reproduce
- Change the apiVersion in `Chart.yaml` to `v1`

Expected behavior
UI should show my application as synchronized and healthy.