dotnet / aspnetcore

ASP.NET Core is a cross-platform .NET framework for building modern cloud-based web applications on Windows, Mac, or Linux.
https://asp.net
MIT License

ARC App Service cannot authenticate to AAD #42631

Open snapfisher opened 2 years ago

snapfisher commented 2 years ago

Is there an existing issue for this?

Describe the bug

I have a Blazor Server app running on an Azure App Service on ARC that cannot authenticate to AAD. The error is that the authentication redirect URL is http:// when it should be https://.

This application works correctly when deployed to an App Service running Windows or to an App Service running Linux.

I originally submitted this to the microsoft-identity-web folks, and the still-open issue is here: https://github.com/AzureAD/microsoft-identity-web/issues/1792, but they do not believe the error is on their side.

This is a blocker for going to production, and it's a blocker now for setting up customer-facing demos.

Expected Behavior

Authentication under App Service on ARC should work identically to deployment on an actual Linux App Service. I should be able to log in.

Steps To Reproduce

  1. Next->Next->Continue a Blazor Server app in Visual Studio, then wire it up for AAD authentication (see the sketch after these steps).
  2. Create an ARC App Service as described here: https://docs.microsoft.com/en-us/azure/app-service/quickstart-arc
  3. Using visual studio, publish the blazor app to the arc app service

You will not be able to complete the OAuth authentication.
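
For reference, the AAD wiring the template produces is roughly the following (a sketch; the "AzureAd" configuration section name is the template default, and your configuration may differ):

using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

// Reads tenant/client settings (including the /signin-oidc callback path)
// from the "AzureAd" section and registers OpenID Connect sign-in.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));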

Exceptions (if any)

AADSTS50011: The redirect URI 'http://xxx.eastus.k4apps.io/signin-oidc' specified in the request does not match the redirect URIs configured for the application

.NET Version

6.0.301

Anything else?

Visual Studio: 17.2.5

PS C:\Users\pfisher> dotnet --info

.NET SDK (reflecting any global.json):
 Version:   6.0.301
 Commit:    43f9b18481

Runtime Environment:
 OS Name:     Windows
 OS Version:  10.0.22000
 OS Platform: Windows
 RID:         win10-x64
 Base Path:   C:\Program Files\dotnet\sdk\6.0.301\

Host (useful for support):
  Version: 6.0.6
  Commit:  7cca709db2

.NET SDKs installed:
  3.1.420 [C:\Program Files\dotnet\sdk]
  6.0.301 [C:\Program Files\dotnet\sdk]

.NET runtimes installed:
  Microsoft.AspNetCore.App 3.1.26 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
  Microsoft.AspNetCore.App 5.0.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
  Microsoft.AspNetCore.App 6.0.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
  Microsoft.NETCore.App 3.1.26 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
  Microsoft.NETCore.App 5.0.17 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
  Microsoft.NETCore.App 6.0.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
  Microsoft.WindowsDesktop.App 3.1.26 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
  Microsoft.WindowsDesktop.App 5.0.17 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
  Microsoft.WindowsDesktop.App 6.0.6 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]

blowdart commented 2 years ago

The Identity folks are probably right :)

Inside the cluster itself your apps are probably running on http only, with a proxy in front taking care of HTTPS termination and then forwarding the requests to the app inside the cluster. If you can look at the logs on startup, you'll see the first one or two entries show what the app is listening on, e.g.:

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: https://localhost:7035
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5284

I'd guess you only see an http URL.

What you need to do is add the ForwardedHeadersMiddleware which, assuming ARC passes the right headers, will override the scheme the app is actually running on with the one the proxy received, which would be https.

There are some code samples in the docs to get you up and running, and a minimal sketch follows below. If ARC is coming through Azure Front Door, these headers should be added for you; if it's something else (their agent directly?), please let us know and we'll go talk to them and try to get to the bottom of it.
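
A minimal sketch of that setup, assuming the proxy sends X-Forwarded-Proto (the docs samples are the authoritative version):

using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    // Trust the client IP and scheme reported by the proxy.
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});

var app = builder.Build();

// Register early, before authentication, so Request.Scheme is rewritten
// to https before any redirect URIs are built.
app.UseForwardedHeaders();

app.MapGet("/", (HttpContext ctx) => $"scheme: {ctx.Request.Scheme}");

app.Run();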

ghost commented 2 years ago

Hi @snapfisher. We have added the "Needs: Author Feedback" label to this issue, which indicates that we have an open question for you before we can take further action. This issue will be closed automatically in 7 days if we do not hear back from you by then - please feel free to re-open it if you come back to this issue after that time.

blowdart commented 2 years ago

Tagging in @apwestgarth for investigation :)

snapfisher commented 2 years ago

I've already tried that; see the entire issue at https://github.com/AzureAD/microsoft-identity-web/issues/1792. I could not get the behavior to change. We even tried setting the ASPNETCORE_FORWARDEDHEADERS_ENABLED environment variable manually, even though the identity folks thought it should already be set, and got no change in the behavior.
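
For what it's worth, my understanding (an assumption from the host defaults, not something I've verified against ARC) is that setting that variable is roughly equivalent to:

using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

    // The host clears these, so any upstream proxy is trusted.
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});

var app = builder.Build();
app.UseForwardedHeaders(); // the host inserts this at the start of the pipeline
app.Run();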

I did take your suggestion and go back to the logs for the last 72 hours. I'm not sure what is up with ARC, but I am not getting logs like that, only:

[screenshot: log entries showing the app listening on port 8080]

I don't know where 8080 is coming from. This is essentially all of my runs where I had logging enabled; some of the runs were with the forwarding middleware on and some off, and even though the logs are whacked, they seem consistent across the runs.

blowdart commented 2 years ago

Andy is taking a look at it

apwestgarth commented 2 years ago

@snapfisher as @blowdart mentioned, I'm currently investigating and will confer with engineering on Monday. In App Service on Arc we don't go through the same infrastructure as in App Service Linux as we are running using Kubernetes components. Requests are routed through an Envoy reverse proxy to the application pods and we terminate TLS at Envoy. Will report back once we have completed the investigation.

ahmelsayed commented 2 years ago

@blowdart is there a difference when running in a docker container?

I have

Program.cs

using System.Text;
using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
var app = builder.Build();

app.UseForwardedHeaders();

app.MapGet("/", async ctx =>
{
    var sb = new StringBuilder();
    sb.AppendLine($"scheme: {ctx.Request.Scheme}");
    sb.AppendLine($"Protocol: {ctx.Request.Protocol}");
    sb.AppendLine($"isHttps: {ctx.Request.IsHttps}");
    foreach (var h in ctx.Request.Headers)
    {
        sb.AppendLine($"{h.Key}: {h.Value}");
    }

    ctx.Response.ContentType = "text/plain";
    await ctx.Response.WriteAsync(sb.ToString());
});

app.Run();

If I try running it on my machine, i.e.:

$ dotnet run --project DumpHeaders/DumpHeaders.csproj

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: https://localhost:7131
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5131
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /tmp/scratch/DumpHeaders/DumpHeaders/
$ curl http://localhost:5131 -H "x-forwarded-proto: https"

scheme: https
Protocol: HTTP/1.1
isHttps: True
Accept: */*
Host: localhost:5131
User-Agent: curl/7.83.1
X-Original-For: 127.0.0.1:53716
X-Original-Proto: http

If I build a docker container and run it:

Dockerfile

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["DumpHeaders/DumpHeaders.csproj", "DumpHeaders/"]
RUN dotnet restore "DumpHeaders/DumpHeaders.csproj"
COPY . .
WORKDIR "/src/DumpHeaders"
RUN dotnet build "DumpHeaders.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "DumpHeaders.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DumpHeaders.dll"]
```
$ docker run --rm -it -p 8090:80 dumpheaders:1

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/

but the same curl command now outputs


$ curl http://localhost:8090 -H "x-forwarded-proto: https"

scheme: http
Protocol: HTTP/1.1
isHttps: False
Accept: */*
Host: localhost:8090
User-Agent: curl/7.83.1
x-forwarded-proto: https

blowdart commented 2 years ago

It comes down to the configuration of whatever proxy is in front of the container/service. Azure App Service puts the header in by default, as does nginx, I believe, and so does YARP.

ahmelsayed commented 2 years ago

There is no proxy in the example above. I'm running the app on my machine, once directly using dotnet run and once in a docker image, and I see the behavior difference shown above.

I can see Arc also setting the header correctly, but I'm trying to reproduce the issue using the same headers passed from the proxy in Arc.

ahmelsayed commented 2 years ago

@blowdart it seems to have to do with context.Connection.RemoteIpAddress and x-forwarded-for

If I remove ForwardedHeaders.XForwardedFor from the options.ForwardedHeaders flags, it works as expected. When that flag is there, the middleware rejects the forwarded headers with an error like:

  Unknown proxy: [::ffff:10.244.14.91]:51506  

The request from the proxy (which is running at the above IP) looks like:

context.Connection.RemoteIpAddress: ::ffff:10.244.14.91
context.Connection.RemotePort: 54208

Headers:
x-forwarded-for: 20.10.19.09 // edge proxy public IP
x-forwarded-proto: https
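
A sketch of one way to make the known-proxy check accept that remote address; the CIDR ranges here are assumptions about the cluster's pod network:

using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

    // Pod network in IPv4 notation (10.244.0.0/16 is an assumption).
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("10.244.0.0"), 16));

    // The same range in IPv4-mapped IPv6 notation, in case the remote
    // address (::ffff:10.244.14.91) is compared without being normalized.
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("::ffff:10.244.0.0"), 112));
});

var app = builder.Build();
app.UseForwardedHeaders();
app.Run();
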
blowdart commented 2 years ago

Oh interesting, IPv6

@mkArtakMSFT who owns the header overrides middleware right now?

wtgodbe commented 2 years ago

@adityamandaleeka seems like this is an issue in the forwarding middleware, so I guess our team owns it...

wtgodbe commented 2 years ago

Assigning you so you can assign someone 😆

ghost commented 2 years ago

Thanks for contacting us. We're moving this issue to the .NET 8 Planning milestone for future evaluation / consideration. Because it's not immediately obvious that this is a bug in our framework, we would like to keep this around to collect more feedback, which can later help us determine its impact. We will re-evaluate this issue during our next planning meeting(s). If we later determine that the issue has no community involvement, or that it is a very rare and low-impact issue, we will close it so that the team can focus on more important and high-impact issues. To learn more about what to expect next and how this issue will be handled, you can read more about our triage process here.

snapfisher commented 2 years ago

I don't quite understand. As of now, if you create a Blazor app using AAD on App Service for ARC, you cannot log in, so this is fatal. Are you moving all of ARC App Service to .NET 8? What is the workaround for this?

sebastienros commented 2 years ago

Based on @ahmelsayed's comments, the headers are rejected when at least one of the properties ForwardedHeadersOptions.KnownNetworks or ForwardedHeadersOptions.KnownProxies is non-empty (which is the default) and neither contains the request's remote IP. That suggests two workarounds: add the proxy's network to KnownNetworks, or clear both collections.

Could you try both of these workarounds?

Source for reference: https://github.com/dotnet/aspnetcore/blob/main/src/Middleware/HttpOverrides/src/ForwardedHeadersMiddleware.cs#L250
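
Paraphrasing the gate in that file, a simplified sketch (not the framework's literal code; I believe the real check also normalizes IPv4-mapped IPv6 addresses before comparing):

using System.Linq;
using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

// Simplified stand-in for the middleware's known-address check.
static bool TrustsRemote(ForwardedHeadersOptions options, IPAddress remote)
{
    // If both collections are empty, the check is skipped entirely,
    // which is why clearing them is one workaround.
    var checkKnownIps = options.KnownNetworks.Count > 0
                     || options.KnownProxies.Count > 0;
    return !checkKnownIps
        || options.KnownProxies.Contains(remote)
        || options.KnownNetworks.Any(network => network.Contains(remote));
}

var options = new ForwardedHeadersOptions(); // defaults trust loopback only
var proxy = IPAddress.Parse("::ffff:10.244.14.91");
Console.WriteLine(TrustsRemote(options, proxy)); // False -> "Unknown proxy"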

sebastienros commented 2 years ago

Until then I will also try the Docker repro and apply my suggestions

snapfisher commented 2 years ago

I am unclear what values I should use for these. For the IP, would it be the IP address of the deployed site, and then which proxies? If you could give me an example for running ARC App Service on Azure, that would be excellent.

ahmelsayed commented 2 years ago

@snapfisher per @sebastienros' comment above, you can either do:

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

+   options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("10.0.0.0"), 8));
});

Assuming your ~kubernetes cluster network~ --pod-network-cidr is some 10.0.0.0/8 variety. I think the default is 10.244.0.0/16.

or clear both KnownNetworks and KnownProxies

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

+   options.KnownNetworks.Clear();
+   options.KnownProxies.Clear();
});

snapfisher commented 2 years ago

OK, I'll take a look. However... I can tell you that the second example will not fix the problem, as that is what I was trying to do (verbatim!) when I reported this. I will try the first.

snapfisher commented 2 years ago

@ahmelsayed This does not fix the problem. You were right about 10.0.0.0/8, but I have the exact same issue using this:

services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("10.0.0.0"), 8));
});

mitchdenny commented 1 year ago

@sebastienros did you ever get to the bottom of this or were you able to use Docker to repro it?

mattpratt-fs commented 4 weeks ago

I have a .NET 8 Blazor app with Auth0 authentication deployed in AWS behind an ALB. Auth0 sets up cookies + OIDC. The ALB terminates https, then forwards over http to my Blazor app's Kestrel server in Fargate.

I have this same issue.

I did spend some time fiddling with ForwardedHeadersOptions, KnownNetworks, and IPv6 notation.

Basically, the forwarded headers work everywhere EXCEPT for the initial authentication flow.

I do believe the issue is that CookieAuthenticationHandler is run before the ForwardedHeadersMiddleware gets a chance to fix up the headers.

During initial login, the CookieAuthenticationHandler notices there is no valid cookie and redirects to CookieAuthenticationOptions.LoginPath (in my case just the default /Account/Login), but because the ForwardedHeadersMiddleware has not run, the 302 it responds with has an http:// URL.

The OIDC middleware picks that http:// URL up and sends it to the auth provider, which fails the request.

For anyone who googles and stumbles across this issue, I want to share my workaround, which is just to hook OnRedirectToLogin and apply the X-Forwarded-Proto header if it exists:

builder.Services.PostConfigure<CookieAuthenticationOptions>(CookieAuthenticationDefaults.AuthenticationScheme,
    cookieOptions =>
    {
        // Run the scheme fix-up first, then fall through to the original handler.
        cookieOptions.Events.OnRedirectToLogin =
            ProxyEvent.Proxy(LoginRedirect, cookieOptions.Events.OnRedirectToLogin);
    });

async Task LoginRedirect(RedirectContext<CookieAuthenticationOptions> context)
{
    // If the proxy says the original request was https, rewrite the scheme
    // of the redirect URI to match before the 302 goes out.
    context.Request.Headers.TryGetValue("X-Forwarded-Proto", out var header);
    if (context.RedirectUri.StartsWith("http://") && header == "https")
    {
        context.RedirectUri = "https://" + context.RedirectUri.Substring(7);
    }
    await Task.CompletedTask;
}

public static class ProxyEvent
{
    // Chains two handlers so both run: ours first, then the original.
    public static Func<T, Task> Proxy<T>(Func<T, Task> firstHandler, Func<T, Task> originalHandler)
    {
        return async (context) =>
        {
            if (firstHandler != null)
            {
                await firstHandler(context);
            }
            if (originalHandler != null)
            {
                await originalHandler(context);
            }
        };
    }
}