damian-krak opened this issue 3 years ago
Thanks @damian-krak for opening this issue. I took a quick look at the Helm charts you shared, and indeed nothing looks wrong to me. From what you describe, it does sound like, for some reason, our remote agent isn't running properly, so the restoration job is fulfilling its purpose: restoring the cluster to a correct state when it detects that something is wrong.
Now, we need to understand why the remote agent doesn't start/run properly.
When you start debugging apigateway-backoffice-service, we find the pod corresponding to this service and either replace it directly with our own remote agent pod (non-isolated mode) or create a new pod with our remote agent next to the original one (isolated mode).
If you describe this pod, you should be able to validate that it's our remote agent (thanks to its lpkremoteagent Docker image) and see whether there is any reason why it doesn't run.
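As a sketch, the check described above could look like this (the label selector and `<pod-name>` are placeholders; check your service's actual selector with `kubectl describe svc`, and add `-n <namespace>` if the service isn't in the default namespace):

```shell
# List the pods backing the service (selector is an assumption).
kubectl get pods -l app=apigateway-backoffice-service

# Describe the pod and check the container image; for the remote agent
# you should see an image name containing "lpkremoteagent".
kubectl describe pod <pod-name> | grep -i image

# The Events section at the end of the describe output often explains
# why a pod fails to start (ImagePullBackOff, CrashLoopBackOff,
# failed probes, scheduling problems, ...).
kubectl describe pod <pod-name> | sed -n '/Events:/,$p'
```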
If the remote agent does run, then could you please retrieve the logs for this pod?
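For retrieving those logs, something like the following should work (again with `<pod-name>` as a placeholder):

```shell
# Current container logs for the remote agent pod.
kubectl logs <pod-name>

# If the container has already restarted, the previous instance's logs
# are usually the more telling ones.
kubectl logs <pod-name> --previous
```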
Describe the bug
I am unable to start debugging my service deployed to AKS in Visual Studio 2019. No error message is shown; only the "Connecting to cluster" window is visible for a very long time.
Logs
In the -deployment-restore- pod I can find the following line multiple times:
At the end there is:
Additional context
I'm using Visual Studio 2019. Everything works fine when I'm replacing, for example, the kibana service deployed to my cluster. It fails only for services deployed using my Helm charts. They are quite simple (see attachments): helm.zip. Could you guide me on what might be wrong with the Helm chart/deployment that is causing the problem?
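One way to narrow down a chart-specific problem is to compare what the chart renders against what is actually live in the cluster (release name `my-release` and chart path `./helm-chart` are placeholders; substitute your own):

```shell
# Render the chart locally to inspect the manifests Helm would apply,
# without touching the cluster.
helm template my-release ./helm-chart

# Dump the live deployment and service; mismatched label selectors,
# multiple containers per pod, or strict probe settings are common
# reasons a debugging tool can't swap in its agent pod cleanly.
kubectl get deployment,svc -o yaml
```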