microsoft / mindaro

Bridge to Kubernetes - for Visual Studio and Visual Studio Code
MIT License

Using the same domain in multiple ingress definitions breaks isolation mode #77

Open greggbjensen opened 3 years ago

greggbjensen commented 3 years ago

Bug

When you define two different ingresses that use the same domain but with different paths, isolation mode fails to start.


Watcher log

Only unique values for domains are permitted. Duplicate entry of domain .mysub.testuri.org

To Reproduce

  1. Create an ingress YAML file for a domain with a path that points to a service
  2. Create a second ingress YAML file for the same domain with a different path that points to a different service (a sketch of such a pair follows this list)
  3. Try to start Bridge to Kubernetes with isolation mode
  4. Observe the error above
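
For reference, a minimal sketch of the kind of manifest pair that triggers the conflict. The host, ingress names, service names, and port below are illustrative assumptions, not taken from the original report:

```yaml
# ingress-identity.yaml — first ingress on the shared host (illustrative names)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: org-api-identity
spec:
  rules:
    - host: api.testuri.org
      http:
        paths:
          - path: /identity
            pathType: Prefix
            backend:
              service:
                name: identity
                port:
                  number: 80
---
# ingress-catalog.yaml — second ingress reusing the same host with a different path
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: org-api-catalog
spec:
  rules:
    - host: api.testuri.org
      http:
        paths:
          - path: /catalog
            pathType: Prefix
            backend:
              service:
                name: catalog
                port:
                  number: 80
```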

Expected Behavior

As with DevSpaces, support the ability to add multiple ingress definitions with different paths but the same domain.

Environment Details

Client used: Visual Studio
Client's version: 2019
Operating System: Windows 10

Additional context

We are currently using DevSpaces and this works there. We are looking to migrate to Bridge to Kubernetes, but are blocked by this issue. The currently suggested workaround is to put all of the routes into a single large ingress file, which is problematic for independently deployed helm charts. In our case we have an API gateway with microservices as paths under the same domain. Depending on the deployment, different charts or microservices are deployed, each adding its own paths. Example:

| Domain | Path | Microservice | Helm Chart | Ingress |
| --- | --- | --- | --- | --- |
| api.testuri.org | /identity | Identity | org-api-identity | org-api-identity |
| api.testuri.org | /catalog | Catalog | org-api-catalog | org-api-catalog |

It would be difficult, and an anti-pattern, to merge all of these microservices into a single helm chart ingress.yaml. Preferably, each helm chart adds its own route under the same API gateway domain.
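
As an illustration of that per-chart approach, each chart could carry its own small ingress template. This is only a sketch; the values keys (ingress.host, ingress.path) and the use of the release name for the service are hypothetical, not excerpts from the actual charts:

```yaml
# templates/ingress.yaml inside a hypothetical org-api-identity chart
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
spec:
  rules:
    # Shared API gateway domain, e.g. api.testuri.org
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          # Chart-specific path, e.g. /identity
          - path: {{ .Values.ingress.path }}
            pathType: Prefix
            backend:
              service:
                name: {{ .Release.Name }}
                port:
                  number: 80
```

Each chart would then contribute only its own path rule, and the set of deployed charts determines which routes exist under the shared domain.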

pragyamehta commented 3 years ago

Hi @greggbjensen, thanks for reaching out with this issue! It would be great if you could attach the logs from the routing manager running in your namespace. You can retrieve the logs by running the commands below:

  1. Retrieve the routing manager pod name: kubectl -n {namespace} get pods
  2. Retrieve logs for the routing manager pod: kubectl -n {namespace} logs {routing manager pod name from step 1} > routing-manager-logs.txt

Kindly send the routing-manager-logs.txt to us and we will investigate and get back to you!

Please feel free to send the logs to bridgetokubernetes@microsoft.com

greggbjensen commented 3 years ago

It will take me a while to set a cluster and scenario back up for this. Is there anything else you would like before I do?

amsoedal commented 3 years ago

Hi @greggbjensen, we were able to repro your issue and my colleague is working on a fix. If you want to test it out, you can set the routing manager environment variable to point to her custom image:

$env:BRIDGE_ROUTINGMANAGERIMAGENAME="murph15/routingmanager:multipleing"

Let us know if this works for you! Or, if you prefer to wait, the fix should go out when we release (likely next week).

greggbjensen commented 3 years ago

That's great news! Thanks.