microsoft / mindaro

Bridge to Kubernetes - for Visual Studio and Visual Studio Code

Failed to get routing manager deployment status - ran out of time #259

Open · geekyDev4 opened 2 years ago

geekyDev4 commented 2 years ago

Hi, I am getting the following error while establishing routing from a remote Kubernetes cluster to my local machine. This happens while redirecting requests in isolated mode.

Failed to get routing manager deployment status - ran out of time : 'Failed to get routing manager deployment status: 'Invalid value of trigger'
Please include the following Correlation ID when contacting support: '3c23a777-7676-4cc4-8e82-36b1fff2216b1639638995556:146ef2199c0f'

However, if I try without isolation mode, the service runs in debug mode, but no redirection to my local service takes place. I have a NodePort service in the Kubernetes cluster; when I hit it using my virtual machine's IP, the request is routed to the pod inside the cluster instead of to my local service. Kindly help.
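
For reference, this is roughly how I inspect the service while debugging (the service and namespace names below are placeholders for my actual values):

# Show the NodePort service and the label selector it uses
kubectl get svc my-service -n my-namespace -o wide

# List the endpoints backing the service - these are the pods that
# actually receive the redirected traffic
kubectl get endpoints my-service -n my-namespace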

lolodi commented 2 years ago

Hi @geekyDev4, when you run without isolation the Bridge to Kubernetes agent should replace the container running your application logic. Could you verify that after establishing the connection the original pod is terminated and there is a new one running the lpk-agent?
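
Something along these lines should show it (the namespace is a placeholder, and the exact agent image name may differ):

# Watch the pods while Bridge to Kubernetes connects; the original pod
# should terminate and a replacement should appear
kubectl get pods -n my-namespace -w

# Print each pod's container images - the replacement pod should run the
# Bridge to Kubernetes remote agent instead of the application image
kubectl get pods -n my-namespace -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'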

amolerca commented 2 years ago

Hi there,

I'm experiencing the same problem via the VS Code extension (v1.0.120220125). When running with no isolation, everything works fine. When I try to isolate, though, I get the following error:

Failed to establish a connection. Error: Failed to get routing manager deployment status - ran out of time : 'Failed to get routing manager deployment status: 'Invalid value of trigger'
Please include the following Correlation ID when contacting support: '11a7b582-25c4-4b5d-881b-95c38b96de211646069729104:7af7a977cd28'.

On the cluster side, I observe that when running with no isolation, the original pod gets replaced with a pod running the image bridgetokubernetes.azurecr.io/lpkremoteagent:0.1.7. This replacement takes place inside the same deployment as the original pod. On the other hand, when running in isolation mode, the original pod does not get replaced; instead, a new pod is instantiated outside the original deployment. The name of this new pod includes the name of my local machine. Also, there is no service attached to it.
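
To dig a bit further, I check whether the routing manager was deployed at all; the grep pattern below is a guess at the naming, so the exact resource names may differ:

# Look for the routing manager that isolation mode is supposed to deploy,
# plus the cloned pod named after my local machine
kubectl get deployments,pods,services -n my-namespace | grep -i -e routing -e amolerca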

Here is my tasks.json file:

{
    "tasks": [
        {
            "label": "bridge-to-kubernetes.resource",
            "type": "bridge-to-kubernetes.resource",
            "resource": "<resource>",
            "resourceType": "service",
            "ports": ["<port>"],
            "targetCluster": "<targetCluster>",
            "targetNamespace": "<targetNamespace>",
            "useKubernetesServiceEnvironmentVariables": true,
            "isolateAs": "amolerca"
        }
    ]
}