amlwwalker opened this issue 8 years ago
One "simple" way to do this is to run two instances of oauth2_proxy, which use the same cookie-secret and both set cookie-domain to .company.com. Then, after logging into one, you won't have to log into the other: the oauth2_proxy cookie (which applies to all subdomains) will let you through.
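A sketch of that two-instance setup (the hostnames, ports, upstream addresses and credentials below are placeholders, not from this thread; the flag names match those used elsewhere in this thread and the oauth2_proxy README):

# instance protecting app1.company.com
oauth2_proxy \
    --http-address="127.0.0.1:4180" \
    --upstream="http://127.0.0.1:9000/" \
    --redirect-url="https://app1.company.com/oauth2/callback" \
    --cookie-domain=".company.com" \
    --cookie-secret="$SHARED_COOKIE_SECRET" \
    --client-id="$CLIENT_ID" \
    --client-secret="$CLIENT_SECRET" \
    --email-domain="company.com"

# instance protecting app2.company.com (same cookie-secret and cookie-domain)
oauth2_proxy \
    --http-address="127.0.0.1:4181" \
    --upstream="http://127.0.0.1:9001/" \
    --redirect-url="https://app2.company.com/oauth2/callback" \
    --cookie-domain=".company.com" \
    --cookie-secret="$SHARED_COOKIE_SECRET" \
    --client-id="$CLIENT_ID" \
    --client-secret="$CLIENT_SECRET" \
    --email-domain="company.com"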
OK and just so I am clear, if I had 10 domains I wanted to protect I would need 10 instances of oauth2_proxy?
Is there a "complicated" way?
I was wondering whether oauth2_proxy can look at the domain that is coming in and make a "smart" decision on which upstream to forward to based on that? I presume the OAuth provider (i.e. Google) only needs one redirect-url, but oauth2_proxy would need one redirect url for each subdomain app, and therefore you would need multiple Google apps, multiple access/secret keys, etc.?
I suggest something like: nginx port 80/443 -> oauth2_proxy port 4180 -> nginx port 5180 -> various upstreams
but see also: #143 #12 ... and also auth_request which, if you can figure out how to use it, can be used to leave all the multi-subdomain logic in nginx
@ploxiln does oauth2_proxy pass the domain to nginx in your example there? For instance, if nginx is listening on port 5180 for third_subdomain.website.com, will that be passed on by oauth2_proxy? If so, that's great!
By default, yes it does. (Or, the --pass-host-header option can be set to false if the upstream needs a different Host header.)
Hi @ploxiln, I've made reasonable progress with this, and it might help others; the references to issues #12 and #143 were very helpful. I have ended up with the following nginx configuration to use oauth2_proxy for two applications, both of which I want to protect. I am not sure what upstream should be set in the oauth2_proxy config, since the upstream comes from nginx? I have detailed the issue below.
upstream dashboard.example.com {
    # dashboard
    server 172.17.0.6:9000;
}

server {
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    server_name dashboard.example.com;
    proxy_buffering off;
    error_log /proc/self/fd/2;
    access_log /proc/self/fd/1;

    location = /oauth2/start {
        proxy_pass http://172.17.0.4:4180/oauth2/start?rd=%2F$server_name$arg_rd;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }

    location / {
        proxy_pass http://172.17.0.4:4180/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}

upstream internal.example.com {
    # wiki
    server 172.17.0.5:5000;
}

server {
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    server_name internal.example.com;
    proxy_buffering off;
    error_log /proc/self/fd/2;
    access_log /proc/self/fd/1;

    location = /oauth2/start {
        proxy_pass http://172.17.0.4:4180/oauth2/start?rd=%2F$server_name$arg_rd;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }

    location / {
        proxy_pass http://172.17.0.4:4180/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}

server {
    server_name auth.example.com;

    location = /oauth2/callback {
        proxy_pass http://172.17.0.4:4180;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }

    location = /oauth2/start {
        proxy_pass http://172.17.0.4:4180;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }

    location "~^/(?<target_host>[^/]+).example.com/(?<remaining_uri>.*)$" {
        rewrite ^ $scheme://$target_host.example.com/$remaining_uri;
    }

    location / {
        deny all;
    }
}
It all works until the very last step: when the user is logged in, it always sends them to 172.17.0.5:5000, which is the upstream for internal.example.com, even if the user goes to dashboard.example.com. For some reason they never get passed to 172.17.0.6:9000. When I check the headers in Chrome for the request, or the nginx logs, I can see the request is to dashboard, yet I get 172.17.0.5:5000 returned. I think this is because the upstream in the oauth2 config is set to 172.17.0.5:5000, but I thought it would get the upstream from nginx; at least that was my understanding from issue #12.
oauth2_proxy does not get upstream from the nginx config. Those upstream{} blocks in your nginx config are not being used for anything (the place to use them is in proxy_pass lines).
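(For reference, an upstream{} block only takes effect when its name is referenced from a proxy_pass line, roughly like this; the addresses are from the config above, but the upstream name here is illustrative:)

upstream dashboard_backend {
    server 172.17.0.6:9000;
}

server {
    server_name dashboard.example.com;
    location / {
        # the upstream block is consumed here, by name
        proxy_pass http://dashboard_backend;
    }
}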
That's a rather complicated nginx config, which I have seen other people post examples similar to, but it's not at all what I was suggesting.
Oh right... I must have misunderstood you; I had thought you were referring to the upstream in oauth2_proxy passing back to the correct upstream in nginx. That's the last bit I haven't worked out with the above nginx config: how to pass to the correct upstream in nginx AFTER oauth2_proxy has authenticated the user (at the moment everything goes to the upstream in the oauth2_proxy config, and you can only have one upstream specified by port, not path). I'll amend the nginx config to use the upstreams properly. Do you have a tip on passing to the correct endpoint after oauth2_proxy? (I couldn't see how to do it from the two issues you linked to.)
Really trying to avoid giving a specific nginx config example, but ok, here's what I was suggesting:
server {
    listen 80;
    server_name dashboard.example.com;
    location / {
        proxy_pass http://127.0.0.1:4180/;
    }
}

server {
    listen 80;
    server_name internal.example.com;
    location / {
        proxy_pass http://127.0.0.1:4180/;
    }
}

server {
    listen 127.0.0.1:5180;
    server_name dashboard.example.com;
    location / {
        proxy_pass http://172.17.0.6:9000/;
    }
}

server {
    listen 127.0.0.1:5180;
    server_name internal.example.com;
    location / {
        proxy_pass http://172.17.0.5:5000/;
    }
}
The Host header will (by default) not be modified by nginx or oauth2_proxy as they proxy requests. So it will continue to work to distinguish between requests for the different upstream services. oauth2_proxy will (by default) use that Host header when forming the redirect-url.
No request rewriting to stash hostname in path and get it back out again should be necessary! No special oauth path handling should be necessary! oauth2_proxy will either do the oauth or do the proxying as appropriate. It's a proxy for the actual service.
It looks like the services which oauth2_proxy is proxying to are on different IPs, so I just want to clarify that those IPs/ports should not be publicly accessible, otherwise oauth2_proxy can be trivially bypassed.
@ploxiln I understand much better now. How come this isn't the official way of doing this? It seems much simpler than the URL rewriting method...
The only thing I'm not sure about now is what the redirect_url should be. You might have left it out of your nginx example purposefully, and in that case I am assuming I create another endpoint (say auth.example.com) that I point Google's oauth redirect to, and then in the nginx config I just set auth.example.com to proxy_pass to oauth2_proxy on port 4180?
Or have I missed the point again?
EDIT: I did try the above
server {
    listen 80;
    server_name auth.example.com;
    location /oauth2/callback {
        proxy_pass http://127.0.0.1:4180/;
    }
}
However that leaves me in a loop, going back to the login page.
EDIT 2: The more I think about it, that can't work, as then the hostname is incorrect as to which internal app to pass the client to. I notice in the Google console, when setting up the callback on the Google side, you can set multiple redirect_urls, so does that mean you need multiple in the proxy as well?
On the google side you can set up multiple allowed redirect_urls. The authenticating application specifies, during the oauth exchange, what redirect_url is for that exchange, and google will only allow it if it's in the list of allowed redirect_urls. For this example, you'll need two allowed redirect_urls in that list, one for each application domain.
There should be no extra nginx location blocks specifically for /oauth2. If the user is trying to access dashboard.example.com, they'll hit oauth2_proxy, which will see that they're not authenticated (no valid cookie) and send them through the oauth2 scheme with a redirect_url of http://dashboard.example.com/oauth2/callback if you haven't specified redirect_url, because it uses the Host header by default. That URL will of course be served by oauth2_proxy, like all other URLs under dashboard.example.com. Fully authenticated and cookie'd, the user will be redirected to dashboard.example.com/, which will hit oauth2_proxy again, see that the user has a valid cookie, and proxy the request to nginx, which will look at the Host header and proxy the request to the actual application.
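Concretely, for the two domains in this example, that means the Google console's list of allowed redirect URIs would presumably contain both callbacks (scheme depending on how the sites are served):

http://dashboard.example.com/oauth2/callback
http://internal.example.com/oauth2/callback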
OK excellent, so according to the documentation, if I don't specify redirect_url I should get the Host passed through. I can see in the cfg file that this should be the case, however when I log in the redirect is being set to localhost for some reason. I can see that redirect_url="" in the cfg file (I also tried removing it entirely) but to no avail. Any suggestions as to how I can debug this?
It turns out you do need proxy_set_header Host $host; nginx's proxy_pass changes the Host header by default, so you do need to set it back. Sorry about that.
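With that fix, each front-end server block from the earlier suggestion ends up looking roughly like this (same addresses as in that example):

server {
    listen 80;
    server_name dashboard.example.com;
    location / {
        # restore the Host header that proxy_pass would otherwise rewrite
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:4180/;
    }
}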
Sweet, I think that worked.
I'm getting a 500, but with the correct domain in the error. I have the upstream set to http://127.0.0.1:5180, so I think it must be the app. I can curl it locally though.
Thanks, I can probably dig a bit....
EDIT: I can see in the proxy logs:
2016/05/26 22:38:44 reverseproxy.go:184: http: proxy error: dial tcp 127.0.0.1:5180: connection refused
So is that to say nginx isn't listening on 5180 or something? I have turned SELinux off for the time being...
In nginx, multiple server blocks can use the same port, and nginx can decide which server block to use for that request based on the hostname. Try to confirm that nginx is listening on port 5180 (and is on the same server as oauth2_proxy of course).
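One way to check both of those from the oauth2_proxy host (standard tools, nothing oauth2_proxy-specific):

# is anything listening on 5180?
ss -ltnp | grep 5180

# does nginx on 5180 answer for the right virtual host?
curl -v -H 'Host: dashboard.example.com' http://127.0.0.1:5180/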
Sweet, I cracked it. I'll write up a summary of the above and post it here. It might be quite useful for others who want to protect multiple applications.
Hey guys, I have tried configuring oauth2_proxy to do this, but I get a 400 Bad Request ("The plain HTTP request was sent to HTTPS port").
oauth2_proxy is configured using...
http_address = "127.0.0.1:4180"
upstreams = "http://127.0.0.1:5180"
request_logging = true
pass_host_header = true
email_domains = [
"xxxxxxx.com"
]
client_id = "xxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com"
client_secret = "xxxxxxxxxxxxxxxxx"
cookie_name = "_oauth2_proxy"
cookie_secret = "XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX"
and the NGINX is configured....
server {
    listen 443 default ssl;
    server_name www.mydomain.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    add_header Strict-Transport-Security max-age=2592000;

    access_log /var/log/nginx/scratchpad.access.log main;
    error_log /var/log/nginx/scratchpad.error.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:4180/;
    }
}

server {
    listen 127.0.0.1:5180;
    server_name www.mydomain.com;

    add_header Strict-Transport-Security max-age=2592000;
    access_log /var/log/nginx/oauth.scratchpad.access.log main;
    error_log /var/log/nginx/ioauth.scratchpad.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass https://10.0.1.142/;
    }
}
I feel like I am almost there, but I just can't get it figured out.
Thanks!
Something is sending a plain-http request to a port expecting https. Look in nginx and oauth2_proxy logs for a hint as to where exactly this is happening, then you might be able to figure out which side is confused.
BTW this is odd: proxy_pass https://10.0.1.142/; because how can it validate the TLS certificate used by 10.0.1.142?
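(By default nginx does not verify upstream certificates at all; if verification is actually wanted it has to be configured explicitly. A sketch, where the CA bundle path and the certificate's hostname are assumptions:)

location / {
    proxy_pass https://10.0.1.142/;
    # verify the backend certificate against a known CA bundle
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/backend-ca.pem;
    # check/send the name the certificate was actually issued for
    proxy_ssl_name backend.mydomain.com;
    proxy_ssl_server_name on;
}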
Hey @amlwwalker, did you figure out the final working version? Can you share your config? Thanks.
I was able to do this setup using the great documentation at https://github.com/18F/hub/blob/master/deploy/SSO.md
I had to adjust the redirect:
-location "~^/(?<target_host>[^/]+).18f.gov/(?<remaining_uri>.*)$" {
- rewrite ^ $scheme://$target_host$remaining_uri;
+location "~^/(?<target_host>[^/]+).company.com/(?<remaining_uri>.*)$" {
+ rewrite ^ $scheme://$target_host.company.com$remaining_uri;
}
Hello,
Sorry to say that, but the solution with the nginx upstream and rewrite rule is a mess... I would prefer a solution directly in oauth2_proxy, with upstreams set up based on domain or subdomain.
We tried to protect 5 backend applications with oauth2_proxy and the nginx conf is very unmaintainable.
Hi All,
Old issue, but here's how I solved this using auth_request and no funky rewriting:
Nginx config (all in one block + lots of duplication to make it easy to follow):
server {
    include ssl/ssl.conf;
    listen 80;
    listen 443 ssl http2;
    server_name subdomain1.example.com;

    location /oauth2/ {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Auth-Request-Redirect $request_uri;
    }

    location / {
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://backend1/;
    }
}

server {
    include ssl/ssl.conf;
    listen 80;
    listen 443 ssl http2;
    server_name subdomain2.example.com;

    location /oauth2/ {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Auth-Request-Redirect $request_uri;
    }

    location / {
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://backend2/;
    }
}
And these are the relevant options we're using to run oauth2_proxy:
./oauth2_proxy \
    --email-domain="example.com" \
    --request-logging=True \
    --provider="<snipped>" \
    --client-id="<snipped>" \
    --client-secret="<snipped>" \
    --http-address="127.0.0.1:4180" \
    --cookie-secret="$(cat /dev/urandom | env LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)" \
    --set-xauthrequest=True \
    --cookie-domain=".example.com" \
Cheers, Blake
@blakemartin How do you get around not setting upstream?
@nardeas - apologies, I missed this somehow. Got around it by using auth_request within the nginx config and letting nginx handle all the subdomain logic.
@blakemartin can you please post a template of how you are using auth_request? I am getting an infinite login loop.
@KaustubhKhati - I posted an example config above. Please give that a try; if you're still having issues I can email you.
Can this be used to handle multiple subdomains at once? For instance, if nginx is used to route x.company.com and y.company.com, can this be used as authentication for both, and on successful auth pass the connection to the right application? I can see you can have multiple upstreams, but it looks like you can only have one redirect-url.