microsoft / mindaro

Bridge to Kubernetes - for Visual Studio and Visual Studio Code

In isolation mode (PR scenario), regular traffic gets to see the isolated changes #110

Closed MoimHossain closed 3 years ago

MoimHossain commented 3 years ago

I have the Pull Request isolation mode set up on AKS. It is based on nginx-ingress (instead of Traefik, which the example showed), and the pull requests are on Azure DevOps. I am also not using Helm, just regular manifest files.

What I observe is that the routing manager works nicely - it creates all the ingress clones and envoy proxies, as expected. However, when I visit the main URI (not the URL with the isolated subdomain in it), some of the requests seem to be served by the container that came from the PR. (It feels to me like it is doing a round robin to the isolated containers.)

I have simplified the setup so that I can explain it clearly here. I have three .NET Core API apps (calling them frontend, api and backend - as the names suggest, frontend calls api, and api calls backend).

The backend code (the crucial bits) looks like this:

app.UseEndpoints(endpoints =>
{
    endpoints.MapGet("/", async context =>
    {
        await context.Response.WriteAsync("Hello World from Backend!");
    });
});

Then comes the API:

endpoints.MapGet("/", async context =>
{
    using var client = new System.Net.Http.HttpClient();
    var request = new System.Net.Http.HttpRequestMessage { RequestUri = new Uri("http://backend/") };

    // Forward the kubernetes-route-as header so downstream calls
    // stay on the same (isolated or baseline) route.
    var header = "kubernetes-route-as";
    if (context.Request.Headers.ContainsKey(header))
    {
        request.Headers.Add(header, context.Request.Headers[header] as IEnumerable<string>);
    }

    var response = await client.SendAsync(request);
    await context.Response.WriteAsync($"API message: {await response.Content.ReadAsStringAsync()}");
});

The Frontend:

endpoints.MapGet("/", async context =>
{
    using var client = new System.Net.Http.HttpClient();
    var request = new System.Net.Http.HttpRequestMessage { RequestUri = new Uri("http://api/") };

    // Same header propagation as in the API service.
    var header = "kubernetes-route-as";
    if (context.Request.Headers.ContainsKey(header))
    {
        request.Headers.Add(header, context.Request.Headers[header] as IEnumerable<string>);
    }

    var response = await client.SendAsync(request);
    await context.Response.WriteAsync($"Front End --> {await response.Content.ReadAsStringAsync()}");
});

Finally, I deploy them all with this manifest (kubectl apply -f https://gist.githubusercontent.com/MoimHossain/c2b3b09797a7b80e80724bb4d8a24f69/raw/350509896130cf400a0495ae20dffa8426c29987/manifest-b2k-test-app.yaml).
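
The gist has the full manifest; simplified, the baseline deployment/service pair for the api app has roughly this shape (a sketch with placeholder names and image, not the exact gist contents):

# Simplified baseline shape for the "api" app; the gist above has the real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api                         # the label the baseline Service selects on
    spec:
      containers:
        - name: api
          image: myregistry/api:latest   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api                             # matches the pod label above
  ports:
    - port: 80
      targetPort: 80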

To Reproduce

I have a PR workflow (in Azure DevOps) for the API application (the microservice in the middle). The intention is to take the branch name and use it as a subdomain to create a deployment in isolation mode, so the API changes can be tested before they are merged into the main branch. The PR pipeline essentially just deploys the following manifest: kubectl apply -f https://gist.githubusercontent.com/MoimHossain/152404fe33e115e7eb07d48269ac9172/raw/7143ed1148705e7318babd36850c4c0fd7152f91/API-PR.yaml.

For example, if the branch name is feature1, then after the above I can reach feature1.somedomain.com, and it indeed shows the API changes that are in the PR. But if I go to the main URI (http://somedomain.com), then it round-robins and I sometimes get to see the changed API that should be isolated.
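
To see which pods are actually serving the main URI, one can inspect the baseline service's endpoints and the labels on the pods its selector matches (assuming the baseline selector is app=api; adjust to your manifests):

kubectl get endpoints api
kubectl get pods -l app=api --show-labels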

Expected behavior

Traffic coming via the main domain URI should never reach the pods that are deployed in isolation; those should only be reachable when the URI with the isolated subdomain is used.

Can you please let me know if I am doing something wrong in my configuration, or whether this is actually an issue?

MoimHossain commented 3 years ago

I have resolved it myself. I am closing this issue.

amsoedal commented 3 years ago

Hi @MoimHossain glad you were able to get it resolved. Did changing the configuration help?

MoimHossain commented 3 years ago

> Hi @MoimHossain glad you were able to get it resolved. Did changing the configuration help?

Indeed. While cloning the middle service (and its pod) I had to give the service and pod a new label for the pull request flow - that resolved it. My bad, I overlooked this on the first try. Thank you. :-)
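
In other words, the baseline service's label selector (app: api in the earlier sketch) was also matching the PR pods, so plain traffic round-robined across baseline and isolated containers alike. Giving the clone its own label breaks that overlap. A simplified sketch of the corrected clone follows (illustrative names; the routing label and annotation here follow the conventions from the mindaro routing manager samples, so verify the exact keys against the docs for your version):

# Isolated clone for branch "feature1": distinct app label, plus routing metadata.
apiVersion: v1
kind: Service
metadata:
  name: api-feature1
  labels:
    app: api-feature1                          # distinct label; baseline "app: api" no longer matches
    routing.visualstudio.io/route-from: api    # the baseline service being isolated
  annotations:
    routing.visualstudio.io/route-on-header: kubernetes-route-as=feature1
spec:
  selector:
    app: api-feature1
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-feature1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-feature1
  template:
    metadata:
      labels:
        app: api-feature1                      # pod label matches only the clone's Service
    spec:
      containers:
        - name: api
          image: myregistry/api:pr-feature1    # placeholder image for the PR build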