ConservationMetrics / guardianconnector-views

A Nuxt.js tool that reads data from a SQL database and renders it on different views (map, gallery, alerts dashboard)
MIT License

Next steps for auth0 setup with Superset #9

Closed · rudokemper closed this 7 months ago

rudokemper commented 9 months ago

Following #8, these are the next steps to get iframe embedding of guardianconnector-views working in Superset:

rudokemper commented 9 months ago

It turns out that auth0 has implemented clickjacking protection for Universal Login, which prevents the login process from being embedded in an iframe. There is a workaround: downgrade to Classic Universal Login and enable the "Disable clickjacking protection for Classic Universal Login" setting (i.e. turn the protection off), as noted in the documentation link.

A good number of auth0 users have requested the ability to permit logging in via iframe for specified domains (cf. here, here, and here), but no action has been taken by the auth0 team as of yet. Hence, I opted to downgrade the auth0 tenant to Classic Universal Login for now, and that solved the issue.


However, the next issue is that third-party authentication providers like Google and GitHub have their own frame restrictions (e.g. `x-frame-options: deny`), leading to the same clickjacking-prevention issues further downstream in the login flow. (Side note: interestingly, Google Chrome does not seem to have this issue with Google authentication, but other browsers certainly do, and other services like GitHub trip up on this issue in all browsers.)
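
For anyone who wants to verify this: the blocking headers can be inspected directly. A minimal sketch, run as an ES module on Node 18+; the URLs are just the endpoints I checked, and exact values vary by provider.

```ts
// Inspect the frame-blocking headers that third-party login pages send.
// Node 18+ provides a global fetch; header values can change at any time.
const targets = [
  "https://github.com/login",
  "https://accounts.google.com/",
];

for (const url of targets) {
  const res = await fetch(url);
  const xfo = res.headers.get("x-frame-options");
  const csp = res.headers.get("content-security-policy") ?? "";
  const frameAncestors = csp.match(/frame-ancestors[^;]*/)?.[0] ?? null;
  console.log(url, { xfo, frameAncestors });
}
```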

This leads me to conclude that we should break out of the iframe for authentication. I believe the reason the embedded authentication process was working for me last week is that the auth0 token (`auth._token.auth0`) was still in localStorage, and it was passed to the iframe on load, so everything worked seamlessly -- I simply must have forgotten to clear it. Hence, our best option might be to find a different way to get an auth0 token into localStorage during Superset authentication, or somewhere else within Superset. Some options that come to mind:

  1. Insofar as both Superset and Views are on the same tenant and the auth0 token is the same, find a way to persist `auth._token.auth0` in localStorage upon Superset login so that it will be passed to Views. This may involve messing with flask_appbuilder settings in `superset_config.py`, since that is what handles authentication for Superset. PROS: everything is managed server-side and it's a seamless user experience. CONS: we are getting deeper into customizing our own Superset build and thus increasing our maintenance overhead, and it may be tricky to accomplish.

  2. Within Superset, add a link to authenticate with auth0 in a new window (popup or tab), and ensure that the auth token is passed back to the Superset window. PROS: this is the standard way other applications handle authentication, and it is supported by all of the third-party services, including auth0 itself. CONS: it breaks the user's flow by requiring them to engage with a popup or new tab, and it's not very intuitive to have to click somewhere else to authenticate within the iframe context. There could also be some complications in making sure the iframe ends up retrieving the auth0 headers. Also, this will introduce issues with loading Views directly (i.e. when it is not embedded in an iframe).

  3. Do the same as (2) but initiate the process from within the iframe; e.g. in Views, open a new tab/popup window to do the authentication. I confirmed that it's at least possible for the browser to open a new tab from within an iframe, as a start. PROS: this is slightly more intuitive from a UX perspective, as the required action is provided within the same context. CONS: it still disrupts the user's flow a little, and there may be unforeseen technical challenges similar to those of (2).

I expect (3) is the path of least resistance; thoughts welcome. A rough sketch of that flow is below.
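
To make (3) concrete, here is what the popup flow could look like on the Views side. None of this is implemented yet; the `/login` route, the window name, and the message shape are illustrative assumptions, while the storage key matches what the auth module uses today.

```ts
// Runs inside the embedded Views iframe: open a same-origin popup to do
// the auth0 login, then wait for it to hand the token back.
function loginViaPopup(): Promise<string> {
  return new Promise((resolve, reject) => {
    const popup = window.open("/login", "gcv-login", "width=480,height=640");
    if (!popup) {
      reject(new Error("Popup blocked"));
      return;
    }

    // The popup is same-origin with the iframe, so it can post the token
    // back once auth0 redirects to our callback page.
    const onMessage = (event: MessageEvent) => {
      if (event.origin !== window.location.origin) return; // ignore strangers
      if (event.data?.type === "auth0-token") {
        window.removeEventListener("message", onMessage);
        localStorage.setItem("auth._token.auth0", event.data.token);
        resolve(event.data.token);
      }
    };
    window.addEventListener("message", onMessage);
  });
}
```

The callback page would then do something like `window.opener.postMessage({ type: "auth0-token", token }, window.location.origin)` and close itself. Since the popup and the iframe'd Views share an origin, shared localStorage could also stand in for postMessage.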

IamJeffG commented 9 months ago

Agreed, I do not want to do 1, for the reasons you stated. Besides being the least understood solution (to us, right now), security restrictions are going to become more strict over time, not less -- so I could see our "solution" ending up short-lived. And we don't have any easy or automated way to keep testing that it still works as time goes on, besides somebody noticing the breakage one day and maybe telling us. (Also, you've seen that this is easy to miss due to cookies lingering from before.)

I don't have an opinion about 2 vs 3. I think the important thing shared by both is that the user doesn't need a separate set of logins. The only difference is in the mechanics of actually using those credentials in both apps, and I'd suggest just doing whichever you think will be better and easier.

I'd also submit option 4, a variation of 2 and 3, in which both apps continue to share an auth0 tenant but we give up on loading Views inside a Superset iframe at all. In this case, Superset might contain a hyperlink you click to open Views in a new tab (its login page, map, and gallery all live in that tab). Sure, it's neat if the maps appear inside Superset, but I am not sure people really expect that. Everyone has had to open a link in a new tab before; Okta and many password managers operate like this. PROS: dead simple, no unknowns, and in fact it's already done. CONS: less elegant.

rudokemper commented 9 months ago

Thanks Jeff. After some thinking, I opted for another variant (5): still embed Views, but show a notice that if the View doesn't load and/or the authentication link in the iframe doesn't work (i.e. if they are on a browser other than Chrome), they should click a hyperlink (with target=_blank) to open the View in a new tab, where authentication will definitely work. They can explore the data there, or refresh the page in Superset once they've authenticated. I think this is a good middle ground for now. The apps are secured with the same set of keys, which is a win, and we can always revisit this after getting user feedback on whether it's a big annoyance or not. A sketch of the embed detection is below.
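
For reference, the embed detection behind that notice is simple; a minimal sketch, where the element id and markup are placeholders rather than the app's real structure:

```ts
// Reveal the fallback notice when Views is rendered inside an iframe.
function isEmbedded(): boolean {
  try {
    return window.self !== window.top;
  } catch {
    // Defensive: treat any access error as "embedded".
    return true;
  }
}

if (isEmbedded()) {
  // Placeholder markup living in the page:
  // <p id="embed-notice" hidden>
  //   If nothing loads, <a href="/" target="_blank" rel="noopener">open
  //   this view in a new tab</a> and sign in there.
  // </p>
  document.getElementById("embed-notice")?.removeAttribute("hidden");
}
```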


rudokemper commented 8 months ago

(Continuing the thread on overall authentication work here, since we don't yet have a better centralized place on GitHub for cross-cutting BCM work:)

Filebrowser provides three authentication methods: JSON Auth, Reverse Proxy, or None. We are currently using the default JSON Auth.
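
For context, the auth method is a CLI-level setting in Filebrowser; per its docs, switching between the methods looks roughly like this (the proxy header name is just an example):

```sh
# JSON Auth (the default, which is what we're using today):
filebrowser config set --auth.method=json

# Reverse Proxy auth instead trusts a username header set by the proxy
# sitting in front of Filebrowser:
filebrowser config set --auth.method=proxy --auth.header=X-My-Header
```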

OAuth is not directly supported by Filebrowser currently. In August 2021, one of the maintainers mentioned it would be part of V3. As of today, they are on V2.2.25, and I don't see any indication on GitHub of a V3 release coming in the near future.

However, the proposed workaround (as mentioned in that issue) is to use a reverse proxy, which to my understanding would entail running a web server like nginx in front of Filebrowser. It might get thorny, judging by the high number of issues where users' nginx configs turn out to be part of their problem, although those could be red herrings since they are not about authentication specifically. We could take advantage of an nginx-proxy Docker container, but we'd be dipping our toes deeper into customizing our Filebrowser deployments, like we do for Superset. A minimal sketch of what the proxy side would do is below.
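
To make the reverse-proxy idea concrete, a minimal nginx sketch, assuming Filebrowser has been switched to proxy auth with a trusted header (e.g. `--auth.method=proxy --auth.header=X-My-Header`); the header name, upstream address, and port are placeholders:

```nginx
server {
    listen 80;

    location / {
        # Whatever actually authenticates the user (e.g. an auth_request
        # to an OAuth sidecar) has to run before this point: Filebrowser
        # will trust the header below unconditionally.
        proxy_pass http://filebrowser:80;
        proxy_set_header X-My-Header $remote_user;
    }
}
```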

Since we're currently only using Filebrowser to store files for the Data Lakehouse and to leverage the URI for media embedding, and not sharing the tool with partners for their own usage, maybe it's best to just use a well-secured username and password with the default JSON Auth method? At least for the time being, until auth0 is natively supported, or until the reverse-proxy route looks less like a potential snake pit of unforeseen issues.

IamJeffG commented 8 months ago

nginx is both an extremely complicated piece of software and also pretty widely used and well documented. I feel like it's a bit of a rabbit hole (mostly meaning I have not wrapped my head around it), and I do not look forward to maintaining it. But if you want to give it a go, I think it would be OK. There's not a great way to do it in Azure App Service; it would have to look something like:

  1. Write an nginx config - you are on your own here :smile:
  2. Plop the nginx config file into our Azure storage container -- it can be read-only so blob storage or File storage is fine.
  3. Configure App Service to mount that storage path into the app container (Configuration → Path Mappings).
  4. Switch App Service to a multi-container deployment (Deployment Center → Container type: docker compose).
  5. The two containers are nginx (the 3rd-party image) and filebrowser. In the YAML config for nginx, include a bind mount to the mounted path where that config file is, so that nginx picks up this config on startup. (A rough compose sketch follows this list.)
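
Something like the following, as a sketch of steps 4-5; the image tags and paths are assumptions, and `${WEBAPP_STORAGE_HOME}` is how App Service exposes the storage path mapped in step 3 inside the containers:

```yaml
version: "3.7"
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      # Bind-mount the config file from the storage path mapped in step 3.
      - ${WEBAPP_STORAGE_HOME}/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro

  filebrowser:
    image: filebrowser/filebrowser:latest
    # Not exposed publicly; nginx reaches it over the compose network.
```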

This is the sort of thing we would have to figure out anew after moving to GCP (which I still think we should do).

Or, I'm also fine keeping the JSON auth as-is until Filebrowser v3 is out. Or we could look for a different piece of software that does something similar and supports OAuth.

rudokemper commented 7 months ago

Clearly, I went with "keeping the JSON auth as-is until Filebrowser v3 is out." :smile:

> Or we could look for a different piece of software that does something similar and supports OAuth.

I think Dd started to look at some different solutions for Kakawa, too. We can follow their research, and make a decision later on.

Closing this for now.