Azure / Bridge-To-Kubernetes

Bridge To Kubernetes (B2K) is a development tool for debugging microservices and pods; it redirects traffic from the cluster to your local development machine and vice versa.
https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes

Isolation with Nginx Ingress no longer works!! #25

Closed letmagnau closed 2 years ago

letmagnau commented 2 years ago

Referring to issue: https://github.com/microsoft/mindaro/issues/342

Describe the bug: if you configure B2K for isolation, it ignores the service that is being debugged and simply redirects all requests to the main deployment.

You cannot debug anymore... it simply writes into the hosts file the entry for the service that should be redirected to local as well...

To Reproduce

Simply configure B2K for isolation and try to debug.
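For reference, a rough sketch of what the isolation setup boils down to on the CLI side. This is not the exact repro: B2K is normally driven from VS Code / Visual Studio, and the `connect` flags below are the ones that appear in the CLI logs later in this thread; the CLI path, service name and routing prefix are placeholders.

```
# Minimal sketch; <b2k-cli> stands in for the downloaded B2K CLI binary.
<b2k-cli> connect \
  --service eolo-webapp \
  --namespace synbee-dev \
  --local-port 80 \
  --routing piero-susca \
  --elevation-requests '[{"requesttype":"edithostsfile"}]'

# With --routing (isolation), only requests sent to the isolated subdomain that
# B2K generates should reach the local machine; all other requests should keep
# hitting the in-cluster service.
```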

Expected behavior: execution should stop on a breakpoint in the local service.

Logs: there is no error; the behavior itself is the problem, it ignores the isolation.

Environment Details
Client used (VS Code/Visual Studio): 1.70.1 6d9b74a70ca9c7733b29f0456fd8195364076dda x64
Client's version: mindaro.mindaro@1.0.120220811
Operating System: 5.18.16-1-MANJARO
Additional context: we are very sad...

I've recreated the cluster from scratch; the behavior remains the same :( very sad..

This is critically blocking for us..

Please look into a fix: isolation does not work anymore, B2K ignores aliasing entirely.

regards

letmagnau commented 2 years ago

update:

I tried with the sample Todo project and it seems to work, but that simple project doesn't use nginx as we do... Before the update the nginx frontend was working, and now it is not.

Can you try with an application that uses nginx with a v1 manifest?

regards

letmagnau commented 2 years ago

We also see these errors on the routing manager deployment:

2022-08-22T11:11:02.8970493Z | RoutingManager | ERROR | Service port 'null' from ingress 'XXXXXXX' does not match any port on the service 'XXXXXXX'.
2022-08-22T11:11:02.8974018Z | RoutingManager | ERROR | Service port 'null' from ingress 'XXXXXXX' does not match any port on the service 'XXXXXXX'.
2022-08-22T11:11:02.8977052Z | RoutingManager | ERROR | Service port 'null' from ingress 'XXXXXXX' does not match any port on the service 'XXXXXXX'.
2022-08-22T11:11:02.8979925Z | RoutingManager | ERROR | Service port 'null' from ingress 'XXXXXXX' does not match any port on the service 'XXXXXXX'.

But all the deployed services have the service port defined.
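For context, the routing manager appears to be comparing the port referenced by each ingress backend against the ports declared on the corresponding Service (that is what the error message says). A quick, generic way to eyeball both sides; the namespace below is the one used elsewhere in this thread, adjust as needed:

```
NS=synbee-dev   # assumption: the application namespace from this thread

# Ports referenced by the ingress backends
kubectl -n "$NS" get ingress -o yaml | grep -A4 'backend:'

# Ports actually declared on the Services
kubectl -n "$NS" get svc -o yaml | grep -A6 'ports:'
```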

hsubramanianaks commented 2 years ago

@letmagnau found another thread where nginx ingress is working - https://github.com/microsoft/mindaro/issues/302#issuecomment-1219905746. It may be something with the version; we released the latest B2K version (1.0.20220816.2). Can you try with this and let us know?

elenavillamil commented 2 years ago

From the error above, Bridge cannot get port information for your ingress. Could you provide the output of running the command below in your cluster (please send it privately if you would like to avoid sharing your service ingress configuration here): kubectl -n get ingressroutes -o json
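Note that `ingressroutes` is typically a Traefik CRD; for an nginx-based setup the equivalent dump would come from the standard Ingress resources, roughly:

```
# Assumption: <namespace> is the application namespace (synbee-dev in this thread)
kubectl -n <namespace> get ingress -o json
```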

It may also be worth updating the title / description of this issue. This is not a generic "isolation is not working"; it seems specific to whatever nginx version or syntax you are using.

elenavillamil commented 2 years ago

Please also provide which version of nginx manifest you are using and which version of k8s you are using. v1 manifests should now be working.

letmagnau commented 2 years ago

@letmagnau found another thread where nginx ingress is working - microsoft/mindaro#302 (comment). It may be something with the version; we released the latest B2K version (1.0.20220816.2). Can you try with this and let us know?

Hi @hsubramanianaks, I'm the one who opened that issue... The problem there was that with a V1 manifest, on the older version, B2K did not create the ingress in isolation; that problem now seems to be solved.

Now the problem is that even though the ingresses are created, they don't redirect to local; the isolated route's traffic is still handled by the main web service.

E.g.: I have two services on the K8s cluster, A and B. I isolate service A with B2K. Even when the local A service is stopped... the alias address continues to work, because the traffic is handled by the main A service that is still running.

I hope that's clear.
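A minimal way to check that scenario from outside; the main hostname is the one mentioned in this thread, and the isolated subdomain is whatever B2K generates:

```
# Main route: should always be served by the in-cluster service A.
curl -i https://app.dev.synbee.it/

# Isolated route created by B2K: with the local copy of A stopped, the expected
# result is an error from the ingress, NOT a normal response served by the
# in-cluster A. Getting a normal response here is exactly the reported bug.
curl -i https://xxxx-xxxx.app.dev.synbee.it/
```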

letmagnau commented 2 years ago

hi @elenavillamil

I've renamed the issue; here is a better description of how to replicate the behavior:

B2K version: 1.0.20220817
Cluster version: 1.23.8
Nginx version: latest

kubectl -n get ingressroutes -o json --> no ingressroutes are found; we never had ingressroutes, even when B2K was working.

We have 3 ingresses, all of them similar; we use Helm to deploy to K8s.

One of the deployed ingresses is:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: XXXXXXXXX
    meta.helm.sh/release-name: eolo-webapp
    meta.helm.sh/release-namespace: synbee-dev
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 30m
  creationTimestamp: "2022-08-22T08:41:14Z"
  generation: 4
  labels:
    app.kubernetes.io/instance: eolo-webapp
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: eolo-webapp
    app.kubernetes.io/version: 1.0.0
    helm.sh/chart: eolo-webapp-1.0.0
  name: eolo-webapp
  namespace: synbee-dev
  resourceVersion: "150872"
  uid: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
spec:
  ingressClassName: nginx
  rules:
  - host: app.dev.synbee.it
    http:
      paths:
      - backend:
          service:
            name: eolo-webapp
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - app.dev.synbee.it
    secretName: tls-secret
status:
  loadBalancer:
    ingress:
    - ip: XXXXXXXXX

If I use B2K with isolation, B2K creates the corresponding ingress (e.g. xxxx-xxxx.app.dev.synbee.it).

When I browse to it, it should redirect to my localhost so that I can debug, but even when the local service is stopped (we start it manually) the site continues to work, because the redirection goes to the main service that is still running. In that scenario we cannot debug anything.

As you can see, the ingress has port 80 defined, and I cannot understand why it tells us it cannot read the ingress port.

regards

letmagnau commented 2 years ago

Hi all, we are investigating this unexplainable behavior more deeply, and we would like to add this information to the discussion, hoping it could be useful. We are using Linux 5.18.16-1-MANJARO #1 SMP PREEMPT_DYNAMIC. Some months ago we encountered a CLI connection problem and we fixed it (in a lucky way) by installing an old version of libicu --> libicu50 (the latest lib is libicu71). This seemed to work, but we are starting to believe it is the cause of the problem... Now that we have uninstalled libicu50 and gone back to the latest, updated lib, the connection problems have come back.
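Two generic checks that may help narrow down the ICU side of this (the package query is an Arch/Manjaro assumption; the environment variable is the standard .NET globalization switch, not a B2K-specific setting):

```
# Which ICU package/version is currently installed (Arch/Manjaro)
pacman -Qs icu

# .NET Core based tools such as the B2K CLI can be told to skip ICU entirely;
# if the CLI connects with this set, the problem is likely ICU-related.
export DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
```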

In fact we have problems starting EndpointManager, and from the log file it seems related to this binary mindaro.mindaro/file-downloader-downloads/binaries/EndpointManager/EndpointManager, with this error:

EndpointManager is not running: 'Cannot assign requested address /home/xxxxx/.bridge/EndpointManager/EndpointManagerSocket'

From that snippet, the problem seems to occur exactly when B2K starts to create the port forwarding, and that is the exact point where this strange behavior happens.

022-08-23T13:36:10.1849827Z | MindaroCli | TRACE | Event: Command.Start {"properties":{"arguments":"connect --service eolo-webapp --env /tmp/tmp-54549exxgswpepcp6.env --script /tmp/tmp-54549exxgswpepcp6.env.cmd --control-port 55338 --ppid 54427 --namespace synbee-dev --elevation-requests [{\"requesttype\":\"edithostsfile\"}] --routing piero-susca --local-port 80","isRoutingEnabled":"true"},"metrics":null}\nOperation context: {"clientRequestId":null,"correlationId":"b163e3cd-0bcb-44f9-aea3-b554cba2a8b61661261678759:932cc4d379a5","requestId":null,"userSubscriptionId":null,"startTime":"2022-08-23T13:36:09.9524256Z","userAgent":"VSCode/1.0.120220817","requestHttpMethod":null,"requestUri":null,"version":"1.0.20220816.2","requestHeaders":{},"loggingProperties":{"applicationName":"MindaroCli","deviceOperatingSystem":"Linux 5.18.16-1-MANJARO #1 SMP PREEMPT_DYNAMIC Wed Aug 3 11:18:52 UTC 2022","framework":".NET Core 3.1.9","macAddressHash":"0000000000000000000000000000000000000000000000000000000000000000","processId":55941,"targetEnvironment":"Production","commandId":"932cc4d379a5"}} 2022-08-23T13:36:10.3000185Z | MindaroCli | TRACE | Running Microsoft.BridgeToKubernetes.Exe.Commands.Connect.ConnectCommand...\nOperation context: {"clientRequestId":null,"correlationId":"b163e3cd-0bcb-44f9-aea3-b554cba2a8b61661261678759:932cc4d379a5","requestId":null,"userSubscriptionId":null,"startTime":"2022-08-23T13:36:09.9524256Z","userAgent":"VSCode/1.0.120220817","requestHttpMethod":null,"requestUri":null,"version":"1.0.20220816.2","requestHeaders":{},"loggingProperties":{"applicationName":"MindaroCli","deviceOperatingSystem":"Linux 5.18.16-1-MANJARO #1 SMP PREEMPT_DYNAMIC Wed Aug 3 11:18:52 UTC 2022","framework":".NET Core 3.1.9","macAddressHash":"0000000000000000000000000000000000000000000000000000000000000000","processId":55941,"targetEnvironment":"Production","commandId":"932cc4d379a5","targetServiceName":"eolo-webapp","isRoutingEnabled":true}} 2022-08-23T13:36:12.9807563Z | MindaroCli | TRACE | Remoting started listening on 55338 2022-08-23T13:36:53.7237131Z | MindaroCli | ERROR | Dependency: Service Run - Port Forward {"target":null,"success":false,"duration":null,"properties":{"requestId":"null","clientRequestId":"null","correlationRequestId":"null"}} 2022-08-23T13:36:53.7957040Z | MindaroCli | ERROR | An unexpected error occurred: 'Failed to launch EndpointManager.'\n 2022-08-23T13:36:53.7967110Z | MindaroCli | ERROR | To see our active issues or file a bug report, please visit https://aka.ms/bridge-to-k8s-report.\n 2022-08-23T13:36:53.7983729Z | MindaroCli | ERROR | For diagnostic information, see logs at '/tmp/Bridge To Kubernetes'.\n 2022-08-23T13:36:53.8119388Z | MindaroCli | ERROR | Logging handled exception: System.InvalidOperationException: {"ClassName":"System.InvalidOperationException","Message":"Failed to launch EndpointManager.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at Microsoft.BridgeToKubernetes.Library.EndpointManagement.EndpointManagementClient.EnsureEndpointManagerRunningAsync(CancellationToken cancellationToken)\n at Microsoft.BridgeToKubernetes.Library.EndpointManagement.EndpointManagementClient.InvokeEndpointManagerAsync[RequestType,ResponseType](RequestType request, CancellationToken cancellationToken, Boolean ensureEndpointManagerRunning)\n at Microsoft.BridgeToKubernetes.Library.EndpointManagement.EndpointManagementClient.StartEndpointManagerAsync(CancellationToken cancellationToken)\n at 
Microsoft.BridgeToKubernetes.Library.ManagementClients.ConnectManagementClient.<>cDisplayClass20_0.<b0>d.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at Microsoft.BridgeToKubernetes.Library.ManagementClients.ManagementClientExceptionStrategy.<>cDisplayClass3_0.<b0>d.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at Microsoft.BridgeToKubernetes.Library.ManagementClients.ManagementClientExceptionStrategy.RunWithHandlingAsync[T](Func`1 func, FailureConfig failureConfig)\n at Microsoft.BridgeToKubernetes.Exe.Commands.Connect.ConnectCommand.ExecuteAsync()\n at Microsoft.BridgeToKubernetes.Exe.CliApp.RunCommandAsync(String[] args, CancellationToken cancellationToken)\n at Microsoft.BridgeToKubernetes.Exe.CliApp.ExecuteAsync(String[] args, CancellationToken cancellationToken)","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146233079,"Source":"Microsoft.BridgeToKubernetes.Library","WatsonBuckets":null} 2022-08-23T13:36:53.8158043Z | MindaroCli | TRACE | Event: Command.End {"properties":{"arguments":"connect --service eolo-webapp --env /tmp/tmp-54549exxgswpepcp6.env --script /tmp/tmp-54549exxgswpepcp6.env.cmd --control-port 55338 --ppid 54427 --namespace synbee-dev --elevation-requests [{\"requesttype\":\"edithostsfile\"}] --routing piero-susca --local-port 80","result":"Failed","failureReason":"Failed to launch EndpointManager."},"metrics":{"duration":43774.0}

Why can't we use B2K without installing this workaround? I think that if we are able to understand this, the behavior may fix itself.

regards

elenavillamil commented 2 years ago

We are going to try to create a cluster/app with nginx and attempt to repro your issue. The EndpointManager issue you are sharing now is completely different; EndpointManager is needed regardless of isolation or non-isolation mode. Could you share the logs from the library and endpointmanager files? They should be in the same folder as the CLI logs. Also the logs in tmp/Bridge-To-Kubernetes? Thank you :)

letmagnau commented 2 years ago

You can download them from the attached link:

https://github.com/Azure/Bridge-To-Kubernetes/files/9403152/bridge-.zip

letmagnau commented 2 years ago

There's no endpointmanager log; where can I find it?

letmagnau commented 2 years ago

I confirm that there's something strange.

In order to connect EndpointManager, I am forced to install libicu50 / libicu53; with only the latest icu package it does not work. Even when EndpointManager starts and I'm able to connect to B2K, if I debug service A (eolo-webapp) in isolation and the service on my localhost is DOWN, traffic is still redirected to the main service in the cluster, as is evident in the routing manager log here:

This means that I cannot debug anymore, and that is very blocking for us and for our Azure subscription.

I've opened a support ticket about it, 2208220050000680, but no answers have been given so far.

2022-08-24T08:12:05.7873778Z | Library | TRACE | Connecting to EndpointManager 2022-08-24T08:12:05.7912536Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40 2022-08-24T08:12:05.7916966Z | Library | TRACE | Sending request: '{"apiname":"Ping","correlationId":"5cdcaf2f-e70f-4744-99a1-57f9af3a31141661328562865:59f0e79c2356:c48a6074ccfc:18a03849d2ce"}' 2022-08-24T08:12:05.7920829Z | Library | TRACE | 130 bytes were sent. 2022-08-24T08:12:05.7946701Z | Library | TRACE | Received request segment: '{"isSuccess":true,"errorMessage":null,"errorType":null}' of size 60 2022-08-24T08:12:05.7950827Z | Library | TRACE | Received response: '{"isSuccess":true,"errorMessage":null,"errorType":null}' 2022-08-24T08:12:25.3668292Z | Library | TRACE | Port forward piero-eolo-webapp-54785fb4cc-trgpf 42801:50051 41534 : Remote stream finished. Closed 2022-08-24T08:12:25.3686125Z | Library | TRACE | Port forward piero-eolo-webapp-54785fb4cc-trgpf 42801:50051 41534 : RemoteConnection closed. 2022-08-24T08:12:35.7843675Z | Library | TRACE | Connecting to EndpointManager 2022-08-24T08:12:35.7882221Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40 2022-08-24T08:12:35.7886668Z | Library | TRACE | Sending request: '{"apiname":"Ping","correlationId":"5cdcaf2f-e70f-4744-99a1-57f9af3a31141661328562865:59f0e79c2356:c48a6074ccfc:18a03849d2ce"}' 2022-08-24T08:12:35.7890747Z | Library | TRACE | 130 bytes were sent. 2022-08-24T08:12:35.8054605Z | Library | TRACE | Received request segment: '{"isSuccess":true,"errorMessage":null,"errorType":null}' of size 60 2022-08-24T08:12:35.8060031Z | Library | TRACE | Received response: '{"isSuccess":true,"errorMessage":null,"errorType":null}' 2022-08-24T08:12:36.7454467Z | Library | TRACE | Dependency: Kubernetes {"target":"WatchV1PodAsync","success":true,"duration":null,"properties":{}} 2022-08-24T08:12:38.3517262Z | Library | TRACE | Port forward piero-eolo-webapp-54785fb4cc-trgpf 42801:50051 34938 : Remote stream finished. Closed 2022-08-24T08:12:38.3517282Z | Library | TRACE | Port forward piero-eolo-webapp-54785fb4cc-trgpf 42801:50051 34938 : RemoteConnection closed. 2022-08-24T08:12:39.3524424Z | Library | TRACE | Port forward piero-eolo-webapp-54785fb4cc-trgpf 42801:50051 34954 : RemoteConnection closed. 2022-08-24T08:12:39.3524698Z | Library | TRACE | Port forward piero-eolo-webapp-54785fb4cc-trgpf 42801:50051 34954 : Remote stream finished. Closed 2022-08-24T08:13:05.7839180Z | Library | TRACE | Connecting to EndpointManager 2022-08-24T08:13:05.7869499Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40 2022-08-24T08:13:05.7873642Z | Library | TRACE | Sending request: '{"apiname":"Ping","correlationId":"5cdcaf2f-e70f-4744-99a1-57f9af3a31141661328562865:59f0e79c2356:c48a6074ccfc:18a03849d2ce"}' 2022-08-24T08:13:05.7876649Z | Library | TRACE | 130 bytes were sent. 
2022-08-24T08:13:05.7908645Z | Library | TRACE | Received request segment: '{"isSuccess":true,"errorMessage":null,"errorType":null}' of size 60 2022-08-24T08:13:05.7912425Z | Library | TRACE | Received response: '{"isSuccess":true,"errorMessage":null,"errorType":null}' 2022-08-24T08:13:35.7850680Z | Library | TRACE | Connecting to EndpointManager 2022-08-24T08:13:35.7880370Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40 2022-08-24T08:13:35.7884009Z | Library | TRACE | Sending request: '{"apiname":"Ping","correlationId":"5cdcaf2f-e70f-4744-99a1-57f9af3a31141661328562865:59f0e79c2356:c48a6074ccfc:18a03849d2ce"}' 2022-08-24T08:13:35.7887328Z | Library | TRACE | 130 bytes were sent. 2022-08-24T08:13:35.7908654Z | Library | TRACE | Received request segment: '{"isSuccess":true,"errorMessage":null,"errorType":null}' of size 60 2022-08-24T08:13:35.7911962Z | Library | TRACE | Received response: '{"isSuccess":true,"errorMessage":null,"errorType":null}' 2022-08-24T08:13:36.7732087Z | Library | TRACE | Dependency: Kubernetes {"target":"WatchV1PodAsync","success":true,"duration":null,"properties":{}} 2022-08-24T08:14:05.7844114Z | Library | TRACE | Connecting to EndpointManager 2022-08-24T08:14:05.7857286Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40 2022-08-24T08:14:05.7859298Z | Library | TRACE | Sending request: '{"apiname":"Ping","correlationId":"5cdcaf2f-e70f-4744-99a1-57f9af3a31141661328562865:59f0e79c2356:c48a6074ccfc:18a03849d2ce"}' 2022-08-24T08:14:05.7860869Z | Library | TRACE | 130 bytes were sent. 2022-08-24T08:14:05.7903629Z | Library | TRACE | Received request segment: '{"isSuccess":true,"errorMessage":null,"errorType":null}' of size 60 2022-08-24T08:14:05.7905604Z | Library | TRACE | Received response: '{"isSuccess":true,"errorMessage":null,"errorType":null}' 2022-08-24T08:14:35.7850501Z | Library | TRACE | Connecting to EndpointManager 2022-08-24T08:14:35.7885464Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40 2022-08-24T08:14:35.7889289Z | Library | TRACE | Sending request: '{"apiname":"Ping","correlationId":"5cdcaf2f-e70f-4744-99a1-57f9af3a31141661328562865:59f0e79c2356:c48a6074ccfc:18a03849d2ce"}' 2022-08-24T08:14:35.7893087Z | Library | TRACE | 130 bytes were sent. 2022-08-24T08:14:35.7916153Z | Library | TRACE | Received request segment: '{"isSuccess":true,"errorMessage":null,"errorType":null}' of size 60 2022-08-24T08:14:35.7920164Z | Library | TRACE | Received response: '{"isSuccess":true,"errorMessage":null,"errorType":null}' 2022-08-24T08:14:36.8165644Z | Library | TRACE | Dependency: Kubernetes {"target":"WatchV1PodAsync","success":true,"duration":null,"properties":{}}

letmagnau commented 2 years ago

Another thing...

If there's a redirection to my localhost, why am I finding the debugged service in my hosts file? Why can't I find a redirection on my local machine? Regards
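For context: the hosts-file entries that B2K adds are generally there so that the locally running process can resolve other cluster services by name, while inbound traffic is supposed to arrive through the port-forward rather than through the hosts file. They look roughly like the sketch below (addresses and names are purely illustrative, not taken from this cluster):

```
# Illustrative /etc/hosts entries written by the B2K EndpointManager
127.1.1.1   gateway gateway.synbee-dev
127.1.1.2   bikesharingweb bikesharingweb.synbee-dev
```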

letmagnau commented 2 years ago

Update:

I forked the Mindaro project and in the BikeSharing sample I changed the traefik ingress to nginx with a v1 manifest (as in our environment), and the behavior is the same with your code as well.

You can check here: https://github.com/letmagnau/mindaro

hsubramanianaks commented 2 years ago

@letmagnau we are looking into your forked repo, but FYI we have deprecated the bike sharing app and we use todo-app only for our examples and bug replications. We are working on setting up the todo-app with an nginx ingress similar to yours to see if we can replicate the issue. Please give us some time, we will get back to you. Thank you.

letmagnau commented 2 years ago

hi @hsubramanianaks

OK regarding the deprecation, but that sample uses an ingress; todo-app does not.

hsubramanianaks commented 2 years ago

@letmagnau we will look into your forked branch to replicate the issue. Please give us some time, we will have an update on this. Thanks.

hsubramanianaks commented 2 years ago

@letmagnau Can you try giving backend.service.port.name in your ingress.yaml (port name and port number are mutually exclusive), matching the port name on the service you are using? Example ingress.yaml:

[screenshot: example ingress.yaml using backend.service.port.name]

Example service:

[screenshot: example Service with a named port]

To be clear, the service.port.name on your service needs to match paths.path.http.backend.service.port.name in the ingress. Since in your example port.name was not present, it was erroring out with: RoutingManager | ERROR | Service port 'null' from ingress 'XXXXXXX' does not match any port on the service 'XXXXXXX'. Please try it and let me know. Thank you.
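A minimal sketch of the pairing being suggested (names, numbers and hosts are illustrative):

```
# Service: the port carries a name ("http")
apiVersion: v1
kind: Service
metadata:
  name: eolo-webapp
spec:
  selector:
    app: eolo-webapp
  ports:
  - name: http          # <-- this name...
    port: 80
    targetPort: 8080
---
# Ingress: the backend references the port BY NAME instead of by number
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eolo-webapp
spec:
  ingressClassName: nginx
  rules:
  - host: app.dev.synbee.it
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: eolo-webapp
            port:
              name: http   # <-- ...must match here (name and number are mutually exclusive)
```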

letmagnau commented 2 years ago

Hi @hsubramanianaks, I never thought that could be the fix... but at this point I'm trying everything.

Unfortunately it does not change anything about this bug; it remains intact. About the logs: as mentioned before, the routing manager logs are not always created (and I don't know why), but in the bridge-library log I found:

"Port forward piero-bikesharingweb-75498ddb8f-59nv8 33327:50051 33982 : Run send loop" this is a TRACE LOG but can be suspicious ... can you value it?

Obviously I've also tried this fix on the BikeSharing project.

PS: can you confirm that you see the same behavior?

2022-08-25T07:53:26.2908975Z | Library | TRACE | Creating web socket for piero-bikesharingweb-75498ddb8f-59nv8 50051 2022-08-25T07:53:26.3680034Z | Library | TRACE | Dependency: Kubernetes {"target":"WatchV1PodAsync","success":true,"duration":null,"properties":{}} 2022-08-25T07:53:26.5666729Z | Library | TRACE | Dependency: Kubernetes {"target":"WebSocketPodPortForwardAsync","success":true,"duration":null,"properties":{}} 2022-08-25T07:53:26.5669751Z | Library | TRACE | Web socket for piero-bikesharingweb-75498ddb8f-59nv8 50051 created. 2022-08-25T07:53:26.5677076Z | Library | TRACE | Port forward piero-bikesharingweb-75498ddb8f-59nv8 33327:50051 33972 : Run receive loop 2022-08-25T07:53:26.6239687Z | Library | TRACE | Accept 33327 to 50051 2022-08-25T07:53:26.6246656Z | Library | TRACE | Port forward piero-bikesharingweb-75498ddb8f-59nv8 33327:50051 33982 : Run send loop 2022-08-25T07:53:26.6251127Z | Library | TRACE | Creating web socket for piero-bikesharingweb-75498ddb8f-59nv8 50051 2022-08-25T07:53:26.9047228Z | Library | TRACE | Dependency: Kubernetes {"target":"WebSocketPodPortForwardAsync","success":true,"duration":null,"properties":{}} 2022-08-25T07:53:26.9058975Z | Library | TRACE | Web socket for piero-bikesharingweb-75498ddb8f-59nv8 50051 created. 2022-08-25T07:53:26.9120899Z | Library | TRACE | Port forward piero-bikesharingweb-75498ddb8f-59nv8 33327:50051 33982 : Run receive loop 2022-08-25T07:53:56.1808859Z | Library | TRACE | Connecting to EndpointManager 2022-08-25T07:53:56.1823083Z | Library | TRACE | Received request segment: 'EndpointManager accepted connection' of size 40

regards

hsubramanianaks commented 2 years ago

@letmagnau just to confirm: are you getting a 404 when accessing the isolated ingress (e.g. http://isolatedroute.bikesharingweb.app.dev.synbee.it/) while running B2K?

hsubramanianaks commented 2 years ago

@letmagnau The nginx controller logs show the following - it seems like your ingress setup for the bike share app is not correct. Can you check?
W0825 14:38:27.308214 7 controller.go:1111] Service "ingress-nginx/gateway" does not have any active Endpoint.
W0825 14:38:41.096955 7 controller.go:1111] Service "ingress-nginx/bikesharingweb" does not have any active Endpoint.
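A quick way to check those "does not have any active Endpoint" warnings independently of B2K (namespace/service names taken as they appear in the log lines above):

```
# If these Services list no endpoint addresses, their selectors do not match
# any ready pods, which would explain the nginx warnings on their own.
kubectl -n ingress-nginx get endpoints gateway bikesharingweb
kubectl -n ingress-nginx get pods -o wide
```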

letmagnau commented 2 years ago

@hsubramanianaks Absolutely not. http://isolatedroute.bikesharingweb.app.dev.synbee.it/ remains up and running even when the bound service on localhost is stopped.

How else can I explain it????

We cannot debug anymore because the redirection to localhost does not happen, and this is costing us money and time.

Our check is exactly that! If I bind service A with B2K and leave that local service STOPPED, I would expect to get an error; instead it ignores this scenario and behaves as if no binding were running, going to the main service.

What do you mean by: "Nginx controller logs shows this - seems like your ingress setup for bike share app is not correct"??

I've forked your project; can you confirm that it doesn't work for you either???

hsubramanianaks commented 2 years ago

@letmagnau I am not able to replicate this issue locally. Please provide a link to the recorded video and we will check. Thank you.

letmagnau commented 2 years ago

hi @hsubramanianaks

We have uploaded the video.

We want to add that, as you suggested, it still does not work even if I add the service port name "http" to all ingresses.

But when Helm deploys it, that attribute is ignored, so I think it is outside the standard v1 manifest, and if it is mandatory for you, that will be a problem.
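If the chart's ingress template hard-codes the port number, a name set only in the values would indeed be dropped at render time; a hypothetical values/template arrangement that carries the name through could look like this (a sketch, not the actual eolo-webapp/bikesharing chart):

```
# values.yaml (hypothetical)
service:
  portName: http

# templates/ingress.yaml (fragment, hypothetical)
          backend:
            service:
              name: {{ .Release.Name }}
              port:
                name: {{ .Values.service.portName }}
```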

regards

qpetraroia commented 2 years ago

Hi @letmagnau,

We have acknowledged the issue and are looking into it. The team is still ramping up with the codebase, so it might take us a bit of time.

thanks, Quentin

hsubramanianaks commented 2 years ago

@letmagnau Thank you for your video. I was not able to reproduce the issue with the bike sharing application. When I connected to the cluster via the Bridge menu and debugged the isolated route, I got a 404 nginx error; Bridge didn't redirect to the main/bikesharingweb service in the cluster. Can you please share how you set up your nginx ingress? Also, can you please update the issue description according to the bug template here - https://github.com/Azure/Bridge-To-Kubernetes/blob/main/.github/ISSUE_TEMPLATE/bug_report.md. Thank you again.

letmagnau commented 2 years ago

What?? You are using our cluster and our nginx configuration; how can you get a 404?? As shown (and we have replicated it many times), it continues to work and redirects to the main web service.

We have also tried from mobile and from another network, like you, and the behavior is the same.

Please upload a video showing it.

The ingress setup is in the forked code, now updated. We have two nginx pods running in another namespace that bind our alias to the AKS dynamic IP.

We continue to see these errors in the routing manager logs:
ERROR | Service port 'null' from ingress 'bikesharingweb' does not match any port on the service 'bikesharingweb'.
ERROR | Service port 'null' from ingress 'gateway' does not match any port on the service 'gateway'.

even though I specified the backend service port name = http.

I'm leaving my bound service running at piero-3fdb.bikesharingweb.app.dev.synbee.it and, obviously, the local service that runs on port 3000 is stopped.

can you try?

letmagnau commented 2 years ago

To be complete, I'm attaching the IngressClass YAML:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    meta.helm.sh/release-name: nginx
    meta.helm.sh/release-namespace: ingress-nginx-ns
  creationTimestamp: "2022-08-22T13:43:20Z"
  generation: 1
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.0
    helm.sh/chart: ingress-nginx-4.2.1
  name: nginx
  resourceVersion: "107612"
  uid: b6dbab9e-f288-48c2-9d78-96f87955abd2
spec:
  controller: k8s.io/ingress-nginx

letmagnau commented 2 years ago

Update: today when connecting to B2K, we get an error like this:

Name:           piero-bikesharingweb-c5b4fcfb-7qv8f-restore-278b2-qvqdk
Namespace:      mindaro
Priority:       0
Node:           aks-agentpool-21099443-vmss000001/10.224.0.5
Start Time:     Tue, 30 Aug 2022 10:15:23 +0200
Labels:         controller-uid=cd81cb80-966f-4444-b791-f993983a3e8d
                job-name=piero-bikesharingweb-c5b4fcfb-7qv8f-restore-278b2
                mindaro.io/component=lpkrestorationjob
                mindaro.io/instance=278b20654c
                mindaro.io/version=0.1.1
Annotations:
Status:         Pending
IP:             10.244.1.140
IPs:
  IP:  10.244.1.140
Controlled By:  Job/piero-bikesharingweb-c5b4fcfb-7qv8f-restore-278b2
Containers:
  lpkrestorationjob:
    Container ID:
    Image:          bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1
    Image ID:
    Port:
    Host Port:
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:
      NAMESPACE:                 mindaro (v1:metadata.namespace)
      INSTANCE_LABEL_VALUE:      278b20654c
      BRIDGE_ENVIRONMENT:        Production
      BRIDGE_COLLECT_TELEMETRY:  True
      BRIDGE_CORRELATION_ID:     152c4699-3499-4366-8c8c-16ec59ad438c1661847293368:41ac6ab5c99d:13f83c9c8cde
    Mounts:
      /etc/patchstate from patchstate (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpfl4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  patchstate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  piero-bikesharingweb-c5b4fcfb-7qv8f-restore-278b2
    Optional:    false
  kube-api-access-xpfl4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  Normal   Scheduled  107s                default-scheduler  Successfully assigned mindaro/piero-bikesharingweb-c5b4fcfb-7qv8f-restore-278b2-qvqdk to aks-agentpool-21099443-vmss000001
  Warning  Failed     107s                kubelet            Failed to pull image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": rpc error: code = Unknown desc = failed to pull and unpack image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to resolve reference "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to do request: Head "https://bridgetokubernetes.azurecr.io/v2/lpkrestorationjob/manifests/0.1.1": dial tcp: lookup bridgetokubernetes.azurecr.io on [::1]:53: read udp [::1]:43663->[::1]:53: read: connection refused
  Warning  Failed     93s                 kubelet            Failed to pull image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": rpc error: code = Unknown desc = failed to pull and unpack image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to resolve reference "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to do request: Head "https://bridgetokubernetes.azurecr.io/v2/lpkrestorationjob/manifests/0.1.1": dial tcp: lookup bridgetokubernetes.azurecr.io on [::1]:53: read udp [::1]:58102->[::1]:53: read: connection refused
  Warning  Failed     67s                 kubelet            Failed to pull image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": rpc error: code = Unknown desc = failed to pull and unpack image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to resolve reference "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to do request: Head "https://bridgetokubernetes.azurecr.io/v2/lpkrestorationjob/manifests/0.1.1": dial tcp: lookup bridgetokubernetes.azurecr.io on [::1]:53: read udp [::1]:48491->[::1]:53: read: connection refused
  Normal   Pulling    18s (x4 over 107s)  kubelet            Pulling image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1"
  Warning  Failed     18s (x4 over 107s)  kubelet            Error: ErrImagePull
  Warning  Failed     18s                 kubelet            Failed to pull image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": rpc error: code = Unknown desc = failed to pull and unpack image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to resolve reference "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1": failed to do request: Head "https://bridgetokubernetes.azurecr.io/v2/lpkrestorationjob/manifests/0.1.1": dial tcp: lookup bridgetokubernetes.azurecr.io on [::1]:53: read udp [::1]:55551->[::1]:53: read: connection refused
  Normal   BackOff    4s (x6 over 107s)   kubelet            Back-off pulling image "bridgetokubernetes.azurecr.io/lpkrestorationjob:0.1.1"
  Warning  Failed     4s (x6 over 107s)   kubelet            Error: ImagePullBackOff
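The pull failures above look like a node-level DNS problem rather than B2K itself: containerd is trying to resolve bridgetokubernetes.azurecr.io against [::1]:53 and the connection is refused. Two generic checks (the debug image and node-access method are assumptions; the node name is the one from the events above):

```
# Can the registry hostname be resolved from inside the cluster at all?
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup bridgetokubernetes.azurecr.io

# What resolver is the node itself using? (kubectl debug mounts the node fs at /host)
kubectl debug node/aks-agentpool-21099443-vmss000001 -it --image=busybox:1.36 -- \
  cat /host/etc/resolv.conf
```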

letmagnau commented 2 years ago

Update: I have now switched to traefik to give it a try, with the same ingress yaml (without the IngressClass), and the behavior is exactly the same. So I'm starting to think that B2K does not work with any project that uses a defined ingress. Do you have a sample with an ingress that works for you?

to be completed:

qpetraroia commented 2 years ago

Hi @letmagnau,

We can't seem to replicate the issue; however, we dug into the logs and found an error that we think may be causing it. We will be publishing an image for you to use. Could you please test it and see if it fixes the problem for you? The steps below describe how to test it:

This is just for testing purposes. If it works for you, we will publish this to main and release it for general use.

letmagnau commented 2 years ago

@qpetraroia

Even without your image I was able to understand the problem, and I hope the fix will resolve it. As @hsubramanianaks said, the yaml attribute paths.path.http.backend.service.port.name seems to be mandatory for the ingress, while following the k8s standard we had paths.path.http.backend.service.port.number.

The first time, I added the port name to our yaml without removing the number, and this is not handled by B2K; in other words, it seems to always pick the number even if you also specify the name.

We then tried leaving out the number (which is not a k8s best practice) and it seems to work correctly now. So the question is: does B2K make some check about this?

thank you for your support

Except for the DNS network issue that is occurring, we should be ready to go again now.

qpetraroia commented 2 years ago

Hi @letmagnau,

It seems your problem has gone away? Can you please share what the fix was? We would love to know on our side, as this helps us learn and take feedback.

Thanks!

letmagnau commented 2 years ago

Yes, as I said in the previous comment, we changed the ingress from this: ... rules:

to this : .. rules:

In the end, we keep only the name, eliding the number. If we leave both, it does not work.
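The actual snippets did not survive in the comments above; reconstructed from the description, the change is roughly this (an assumption, not the user's exact yaml):

```
# Before: backend port referenced by number (did not work with B2K routing here)
        backend:
          service:
            name: bikesharingweb
            port:
              number: 80

# After: backend port referenced by name only (worked)
        backend:
          service:
            name: bikesharingweb
            port:
              name: http
```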

The k8s standard includes the number attribute in the yaml by default, so I think it would be useful if B2K could associate the port number with the string "http" (the port name), because if you want a different port (not 80 or 443) open on the ingress, you can't use B2K anymore.

regards

qpetraroia commented 2 years ago

Thank you! We will be closing this issue.

cc @elenavillamil @hsubramanianaks