supabase / supabase-js

An isomorphic Javascript client for Supabase. Query your Supabase database, subscribe to realtime events, upload and download files, browse typescript examples, invoke postgres functions via rpc, invoke supabase edge functions, query pgvector.
https://supabase.com
MIT License

v2.0.0 regression: Custom JWT (particularly in realtime Channels) #553

Closed LuisAngelVzz closed 8 months ago

LuisAngelVzz commented 1 year ago

Bug report

Description

Using a custom JWT was a feature in v1 : https://supabase.com/docs/reference/javascript/auth-setauth

Per comment: https://github.com/supabase/gotrue-js/pull/340#issuecomment-1218065610 what we did was establish the custom header and it works for creating the client.

However, it's not working with realtime channels. Upon inspection, the problem is that the internal initialization never calls RealtimeClient's setAuth method (https://github.com/supabase/realtime#realtime-rls), which in turn causes the websocket to not send the JWT on messages and heartbeats.

The result is that channels fail the RLS.

A second problem is setting a new valid JWT when the previous one has expired: there's no way to do it, neither in the supabase client nor in the realtime client.

To Reproduce

  1. Create a database with a restrictive policy based on a custom JWT.
  2. Create a JWT with a 1 minute expiration time.
  3. Create a client using the method explained in https://github.com/supabase/gotrue-js/pull/340#issuecomment-1218065610
  4. Then either:
     a. Let 2 minutes go by and try to make any call to Supabase using supabaseClient. Access is denied because the JWT has expired, and there is no way to update the JWT; you would have to create a new client, which defeats the whole point.
     b. Create a channel subscription. It "fails" by not providing access to rows the user can access according to the policy.

Upon inspection of the websocket, as expected, the messages and heartbeats don't include the custom JWT. Why would they? We've only set the header directly, and that's it.

Workaround

What we've done is change two class members from protected to public: "realtime" in SupabaseClient and "headers" in GoTrueClient. Then, with the custom token, we call a) supabaseClient.realtime.setAuth(JWT) and b) supabase.auth.headers.Authorization = `Bearer ${JWT}`.

That keeps everything working.
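Those two calls can be wrapped in one helper. A minimal sketch, assuming the normally-protected members have been made public as described (the helper name is ours, not part of supabase-js):

```javascript
// Hypothetical helper wrapping workaround steps (a) and (b): push a fresh
// custom JWT into both the realtime client and the auth headers.
// Assumes the normally-protected members are accessible.
function applyCustomJwt(supabaseClient, jwt) {
  supabaseClient.realtime.setAuth(jwt);                          // (a) websocket auth
  supabaseClient.auth.headers.Authorization = `Bearer ${jwt}`;   // (b) auth header
}
```

Calling this once at startup and again on every token refresh covers both places the thread identified.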

We tried forking and creating an updateJWT method, but realized we're not very familiar with the modularization philosophy of the project and were most likely overcomplicating it. We were also falling short because an alternate method is quite likely required for initialization, since using the headers option in createClient doesn't affect the realtime client.

magicseth commented 1 year ago

What I was hoping for was a signInWithToken(...) API that would take the JWT, create a session, and fire the auth state change callbacks.

As it is, it feels daunting to reach into protected fields in multiple files to achieve this.

jmedellinc commented 1 year ago

Any news on this? It's a regression. Any way to get the devs' attention?

soedirgo commented 1 year ago

cc @kangmingtay @w3b6x9

ofeenee commented 1 year ago

I'm looking for something like this. It'll solve so many problems I'm having right now.

w3b6x9 commented 1 year ago

This was actually due to an issue where the Realtime client would always default to using the anon token. I released a change where Realtime client will use the global headers Authorization bearer token if present: https://github.com/supabase/supabase-js/releases/tag/v2.7.0

w3b6x9 commented 1 year ago

I do want to mention that if your custom token is short-lived and you're refreshing it yourself you'll still have to call Realtime's setAuth in order to send the refreshed token to Realtime's servers.
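That refresh step might look like the following sketch, where fetchCustomToken stands in for however you mint your own tokens (it is not a supabase-js API):

```javascript
// When a self-issued token is refreshed, resend it to Realtime's servers so
// the websocket keeps passing RLS. fetchCustomToken is an assumed
// user-supplied async function that returns a freshly signed JWT.
async function refreshRealtimeAuth(realtime, fetchCustomToken) {
  const token = await fetchCustomToken();
  realtime.setAuth(token); // resends auth over the existing channel(s)
  return token;
}

// e.g. re-run on an interval shorter than the token lifetime:
// setInterval(() => refreshRealtimeAuth(supabase.realtime, fetchCustomToken), 50_000);
```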

jmedellinc commented 1 year ago

> I do want to mention that if your custom token is short-lived and you're refreshing it yourself you'll still have to call Realtime's setAuth in order to send the refreshed token to Realtime's servers.

Ha! Thanks.

I wonder if a customJWT refresh method is on the todo list? Not urgent; my workaround works without issues. (Though there may be something I'm missing.)

It's all documented on https://github.com/supabase/gotrue-js/issues/701 along with patches to unprotect the applicable class properties:

this.supabase.headers.Authorization = `Bearer ${supabaseToken}`;
this.supabase.auth.headers.Authorization = `Bearer ${supabaseToken}`;
this.supabase.rest.headers.Authorization = `Bearer ${supabaseToken}`;

which have to be updated, in addition to calling setAuth again, when the JWT expires.

w3b6x9 commented 1 year ago

> This was actually due to an issue where the Realtime client would always default to using the anon token. I released a change where Realtime client will use the global headers Authorization bearer token if present: https://github.com/supabase/supabase-js/releases/tag/v2.7.0

After taking a closer look, the issue is with the API gateway. It only accepts Supabase-signed tokens (i.e. anon, service_role).

I just added a section to Realtime docs on how to pass in custom tokens: https://supabase.com/docs/guides/realtime/postgres-changes#custom-tokens.

yamen commented 1 year ago

@w3b6x9 We've just been testing this and want to highlight that the docs are slightly off. They say to pass the Supabase key into headers and the custom JWT into params, whereas it's actually the other way around.

What worked for us is:

const { createClient } = require('@supabase/supabase-js')

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY, {
  realtime: {
    headers: {
      apikey: 'custom-supabase-signed-jwt-token',
    },
    params: {
      apikey: 'supabase-anon-key',
    },
  },
})

Whereas the docs say:

const { createClient } = require('@supabase/supabase-js')

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY, {
  realtime: {
    headers: {
      apikey: 'supabase-signed-token',
    },
    params: {
      apikey: 'your-custom-token',
    },
  },
})

Note that the description of the Supabase anon key in the docs is also confusing:

"The apikey in headers must be either the anon or service_role token that Supabase signed"

Other than this needing to be the apikey in params not headers, it should probably also say "that Supabase provides" rather than "that Supabase signed". This caused a mini goose chase when debugging.

evelant commented 1 year ago

@w3b6x9 I'm migrating to Supabase from Firebase. I'm keeping firebase auth for now since Supabase doesn't have anonymous auth yet. To summarize and clarify the issues based on the above, here's what I currently do:

  1. Sign in the user with firebase auth
  2. Use an edge fn to verify the user's Firebase token then return a new token signed with the Supabase jwt key
  3. Create a client with the following config
    const db = createClient<Database>(creds.url, creds.key, {
        auth: authOpts,
        global: {
            headers: {
                authorization: `Bearer ${key}`, // my custom token
            },
        },
        realtime: {
            headers: {
                apikey: key, // my custom token returned from my edge fn --docs have it backwards
            },
            params: {
                apikey: creds.key, // the supabase anon key
            },
        },
    })

This method works up to the point where I need to refresh the user's token, which I want to do whenever their firebase token gets refreshed. I can call my edge fn to verify the new firebase token and generate a new supabase token, but then as @LuisAngelVzz pointed out there isn't a great way to set this new token in the existing client. For now I will use a workaround like was posted above.

I think there are two things needed to solve these problems:

  1. Provide a method on supabase-js to explicitly handle setting a new custom auth token on an existing client
  2. And/or -- provide a signUpWithCustomToken/signInWithCustomToken option as suggested by @magicseth to create a real gotrue session based on the custom token.

Are either of these things currently planned to be implemented? If so, when?

I hope this effectively summarizes the issue. IMO this is likely to be a problem faced by almost everyone attempting to migrate from Firebase, so I hope it can get bumped up to a higher priority by the Supabase team.

Mukhammadali commented 1 year ago

Here is how I made my custom JWT work with RLS + REALTIME.

I debugged this for a long time and found out that there is an issue in the supabase-js package (the actual issue may be in the realtime-js or gotrue-js packages).

Issue: If you check lines 291 & 296 in this file, you can see that if you are using Supabase auth and the user is logged in, the access token is passed to the realtime client via this.realtime.setAuth(token), but if you are using custom authentication with a custom JWT token, there is no way to pass that token using setAuth.

Workaround solution: Add these lines of code below line 124 and re-bundle the package (or simply patch it locally using the method mentioned below):

this.realtime = this._initRealtimeClient({ headers: this.headers, ...settings.realtime })
if (settings.realtime.params.accessToken) {
  this.realtime.setAuth(settings.realtime.params.accessToken)
}

Then pass the accessToken inside realtime.params like so

const getSupabase = ({access_token}: {access_token: string}) => {
  const supabase = createClient(
    'instance_url',
    'anon_key',
    {
      global: {
        headers: {
          Authorization: `Bearer ${access_token}`,
        },
        fetch: (url, options) => customFetchThatRefreshesAccessToken(url, options), // you don't need this if you don't want to handle refreshing your access token here
      },
      realtime: {
        params: {
          // This does not exist in Supabase doc, because this is a custom param that is handled by the local patch to @supabase/supabase-js library
          // apikey was not working for some reason, so I had to use this workaround.
          accessToken: access_token,
        },
      },
    },
  );

  return supabase;
};

If you want to patch it locally, you can use patch-package and apply this patch. patch name: @supabase+supabase-js+2.12.1.patch (This is the patch for v2.12.1)

diff --git a/node_modules/@supabase/supabase-js/dist/main/SupabaseClient.js b/node_modules/@supabase/supabase-js/dist/main/SupabaseClient.js
index 094b3b0..cfd033b 100644
--- a/node_modules/@supabase/supabase-js/dist/main/SupabaseClient.js
+++ b/node_modules/@supabase/supabase-js/dist/main/SupabaseClient.js
@@ -81,6 +81,9 @@ class SupabaseClient {
         this.auth = this._initSupabaseAuthClient((_e = settings.auth) !== null && _e !== void 0 ? _e : {}, this.headers, (_f = settings.global) === null || _f === void 0 ? void 0 : _f.fetch);
         this.fetch = (0, fetch_1.fetchWithAuth)(supabaseKey, this._getAccessToken.bind(this), (_g = settings.global) === null || _g === void 0 ? void 0 : _g.fetch);
         this.realtime = this._initRealtimeClient(Object.assign({ headers: this.headers }, settings.realtime));
+        if (settings.realtime.params.accessToken){
+            this.realtime.setAuth(settings.realtime.params.accessToken)
+        }
         this.rest = new postgrest_js_1.PostgrestClient(`${_supabaseUrl}/rest/v1`, {
             headers: this.headers,
             schema: (_h = settings.db) === null || _h === void 0 ? void 0 : _h.schema,
evelant commented 1 year ago

For reference here is how I'm updating my supabase client whenever my firestore token gets refreshed.

/**
 * Hax to set new auth tokens on the supabase client when we generate a new token from a firebase token
 * See these issues
 * https://github.com/supabase/gotrue-js/issues/701
 * https://github.com/supabase/supabase-js/issues/553
 */
export const refreshSupabaseTokenFromFirebaseToken = async (
    db: TaskHeroSupabaseClient | null,
    firebaseToken: string
) => {
    const key = await getTokenFromFirebaseToken(firebaseToken)
    if (!db) {
        throw `db not initialized yet`
    }
    db.headers.Authorization = `Bearer ${key}`

    db.realtime.setAuth(key)

    //@ts-ignore this is set as a protected field but we need to modify it to work around lack of custom token refresh
    db.realtime.headers.apikey = `Bearer ${key}`
    //@ts-ignore this is set as a protected field but we need to modify it to work around lack of custom token refresh
    db.auth.headers.Authorization = `Bearer ${key}`
    //@ts-ignore this is set as a protected field but we need to modify it to work around lack of custom token refresh
    db.rest.headers.Authorization = `Bearer ${key}`

    return key
}

You can use @ts-ignore if you don't feel like patching the .d.ts files. protected doesn't do anything at runtime, so the fields are accessible; it's just TS that will complain when you try to access them.

It's also worth noting if you want to use firebase auth that firebase-js onIdTokenChanged doesn't actually get called when a token expires. You need to manually check the firebase token expiry and refresh your custom token before it expires. I use the user.getIdTokenResult() API to get the decoded token from the firebase API then periodically check the claims.exp field (and check anytime my react-native app is foregrounded) against the current time to see if expiration is near. If expiration is within 10 mins I call user.getIdTokenResult(true) to force a refresh and use the resulting token to refresh my custom Supabase token.
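The expiry check described above reduces to a small pure helper (a sketch; the 10-minute margin follows the comment, and the function name is ours):

```javascript
// Decide whether a token should be refreshed: true when the JWT exp claim
// (seconds since epoch, as returned in decoded claims) is within marginMs
// of the current time.
function shouldRefresh(expSeconds, nowMs = Date.now(), marginMs = 10 * 60 * 1000) {
  return expSeconds * 1000 - nowMs <= marginMs;
}
```

Run this periodically (and on app foreground, as described); when it returns true, force a refresh and push the new token to the client.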

rdylina commented 1 year ago

As someone attempting to use a third-party auth, Clerk in this case, here are my thoughts about the direction of implementation for custom tokens. Bear with me here, I'm no master developer, so these are just my thoughts.

There seems to be a push to just create a new supabase client for every call and load the token at that time.

Here are the things that I see that are problematic with this approach:

  1. If you have a short-lived token, you have to go fetch or compute the new token for every single call, which can add intense levels of latency to your DB calls. In my case, I have to serve up the token from Clerk, which takes 80-130ms roundtrip, which is now stacked every single time on top of the latency of making the actual db call from supabase.
  2. There is also just the straight-up machine overhead of creating new instances of an object over and over again instead of persisting an instantiation of it and just updating the key. De-allocating that memory and re-allocating along with all the other trace logic for every single call is bad practice but not inherently an issue on very small scales. BUT on a large application, this multiplies dramatically. Imagine an application with 5,000 active users and the sheer volume of wasted CPU and memory cycles if you're doing this on the server side. Nightmare. Gives me nightmares equivalent to edge function cold starts on every call. 🤮
  3. Connection pooling: I don't know whether the supabase-js library supports automatic pooling of requests. If it does, or ever plans to, this implementation would completely eliminate the possibility of pooling, because you have a totally fresh client instantiation for every single call.

Non-approached based notes of the current state of custom tokens:

  1. Unless someone else has found a native route in the current release that doesn't involve significant patching and overrides, Realtime just doesn't seem to function at all with custom tokens once RLS is enabled. I can confirm the docs are indeed backward in terms of where to place your anon and custom token in headers vs. params. The switch-up appears to be unintentional: with the existing codebase, it simply does not work if you place the keys in the order defined in the docs; it will fail to open the websocket, period. However, even when you reverse the tokens and are able to retrieve data via realtime, once you enable RLS it is a full stop, even with a fully permissive RLS policy. No updates are shipped to custom token subscribers. Maybe I'm missing something here, but I've been round and round with no luck. @evelant have you had any luck with realtime once RLS is enabled? I suspect that realtime is refusing to acknowledge the aud and role from the custom token claims and thus isn't treating the realtime client as an authenticated user. 🤷‍♂️

Design opinion on the ideal solution: As some others have pointed out, the ideal outcome here is really to sidestep this process by allowing login to Supabase with the custom token, effectively leveraging Supabase auth exclusively for database access when using a third-party auth provider or custom token. Once logged in, you could utilize the existing token infrastructure that Supabase uses directly, instead of getting a new token from your auth provider on every single call or upon expiration and handling all that logic yourself. With a third-party auth provider, the one drawback is obviously that Supabase would not know when the user is logged out of their provider; but if it is important to prevent the user from continuing to pull data, you could implement a heartbeat edge function that checks with the auth provider to see whether their session has expired.

That's just my 2 cents on that. I apologize in advance if that's a dumb idea, I'm not super well versed in the security considerations of JWT authentication.

Unfortunately, because I am planning for a production app, I am not comfortable applying the very innovative hacks that others have put together here from @Mukhammadali and @evelant. So I am now left with a choice of losing my chosen auth implementation, losing Supabase, or coming up with some hyper-custom implementation using self-designed middleware.

Any thoughts @w3b6x9 @kangmingtay @silentworks ?

evelant commented 1 year ago

@rdylina After digging into it a little bit it seems the issue, at least for me, isn't RLS specifically. Instead the problem is that realtime postgres change listeners are authenticated as anon, despite me setting the headers as suggested.

Requests from postgrest-js work fine with the token; they get the proper authenticated role and sub. Realtime, however, does not appear to use the custom token and treats the connection as anon, so it probably fails most RLS rules that prevent anonymous access.

@w3b6x9 having realtime work with a custom token is a requirement for us to switch from firebase to supabase. We can't migrate our auth yet so we have to continue using firebase auth which means we need to have custom tokens working or we can't migrate to Supabase. Any further pointers or input on this issue would be greatly appreciated!

edit: looking at the realtime.subscription table I can see that the claims column is bogus for all my realtime listeners. It's {"exp": 1983812996, "iss": "supabase-demo", "role": "anon"} instead of the actual custom token I'm sending from the client.

I also see this in the docker logs for the local realtime container when I try to start a subscription without giving anon privilege

Subscribing to PostgreSQL failed: {:error, "Subscription insert failed with error: ERROR P0001 (raise_exception) invalid column for filter docId. Check that tables are part of publication supabase_realtime and subscription params are correct: %{\"event\" => \"*\", \"filter\" => \"docId=eq.212b057f-7e32-4c40-aa90-8397df960194\", \"schema\" => \"public\", \"table\" => \"logChunks\"}"}

evelant commented 1 year ago

OK, I seem to have gotten it working. Not sure why this works; it's not even close to what the docs suggest.

If I create a client like this -- note I commented out any realtime config

    const db = createClient<Database>(creds.url, creds.key, {
        auth: authOpts,
        global: {
            headers: {
                authorization: `Bearer ${key}`,
            },
        },
        // realtime: {
        //     // headers: {
        //     // apikey: `Bearer ${key}`,
        //     // },
        //     params: {
        //         // apikey: creds.key,
        //         accessToken: `Bearer ${key}`,
        //     },
        // },
    })

    //@ts-ignore - this is a private method but we must call it to make auth work on realtime channels with our custom token
    db.realtime.setAuth(key)

then call realtime.setAuth(key) and now realtime gets the correct info from my custom token. RLS works and realtime.subscription shows the correct claims {"aud": "authenticated", "exp": 1680230955, "sub": "zsGXfLygdCt2OF4Vt0hGrodAuK7a", "role": "authenticated", "app_metadata": null, "user_metadata": null}

I'll need to test this some more (specifically token refresh) but this may be a good enough workaround until the problems with custom tokens get fixed.

rdylina commented 1 year ago

@evelant I suspected that realtime wasn't accepting the token but didn't have the depth of expertise to verify it; good to know that's what is actually happening. I still very much dislike that we're reaching into a protected method to set the refreshed token.

Unfortunately, I'm still in a spot where we can't properly update the regular client token without overrides as well. =(

Update: I can confirm your workaround. Realtime functions as expected with the token manually pushed into that private method.

rdylina commented 1 year ago

After tooling around all evening with this, here is what I've come up with as an ugly workaround. I'm using react query to refresh the token on an interval and to account for failures, refetch on mount, window refocus, etc.


const { data, error } = useQuery({
  queryKey: ["getToken"],
  queryFn: () =>
    getToken({
      template: "supabase-nextjs-boilerplate",
    }),
  enabled: isSignedIn,
  refetchInterval: 50 * 1000,
  refetchIntervalInBackground: true,
  onSuccess: (data) => {
    if (data) updateToken(data);
  },
});

function updateToken(newToken: string) {
  if (newToken) {
    const now = new Date().getTime();
    const exp = parseJwt(newToken).exp * 1000;
    const secondsToExpiration = exp - now;

    supabaseClient?.functions.setAuth(newToken);
    // @ts-ignore this is a protected method but we need to call it to work around lack of custom token refresh
    supabaseClient?.realtime.setAuth(newToken);
    // @ts-ignore this is a protected field but we need to modify it to work around lack of custom token refresh
    supabaseClient.auth.headers.Authorization = `Bearer ${newToken}`;
    // @ts-ignore this is a protected field but we need to modify it to work around lack of custom token refresh
    supabaseClient.rest.headers.Authorization = `Bearer ${newToken}`;
  }
}

The expiration calculations were just for debugging purposes, to ensure I was receiving a new token. There are edge cases I'm surely not covering, but this appears to function in the absence of a supported solution that doesn't involve suspending TS protections.
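The parseJwt helper the snippet relies on isn't shown in the thread; a minimal Node version (payload decode only, no signature verification) could look like this:

```javascript
// Decode a JWT's payload (the middle base64url segment) without verifying
// the signature. Sufficient for reading the exp claim client-side, as in
// the updateToken function above.
function parseJwt(token) {
  const b64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}
```

In a browser you would swap Buffer.from for atob, but never trust these decoded claims for anything security-relevant, since the signature is not checked.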

evelant commented 1 year ago

@rdylina That looks like a good workaround. Pretty much identical to what I'm doing. I'm going to test it further today and will report back here if I find any quirks in this method.

@w3b6x9 doesn't seem to be responding here, but I think they're following along: they've removed the protected modifier from supabase.realtime. https://github.com/supabase/supabase-js/commit/6c61c9de0c9c5ad4aa4d8d1a2bf722c8e650d120

evelant commented 1 year ago

@rdylina Frustratingly this workaround doesn't seem to actually work. Once I've refreshed my token I still get invalid token failures. JWSError (CompactDecodeError Invalid number of parts: Expected 3 parts; got 5). As far as I can tell from reading all the code in the libraries this workaround should work so I'm definitely missing something.

@laktek @hf @soedirgo @J0 Could you provide some input on this issue please? Custom tokens + refresh is the final blocker for my team to finish migrating from Firebase to Supabase.

jmedellinc commented 1 year ago

https://github.com/supabase/gotrue-js/issues/701

Check out my workaround. I have been using it for months. You may need to adjust the patches a bit since I haven't upgraded my install across minor versions.

evelant commented 1 year ago

Hmm, I'm already setting those fields on token refresh. As far as I can tell this should cover all the bases, but I'm still running into issues.

export const refreshSupabaseTokenFromFirebaseToken = async (
    db: MySupabaseClient | null,
    firebaseToken: string
) => {
    const key = await getTokenFromFirebaseToken(firebaseToken)
    if (!db) {
        throw `db not initialized yet`
    }
    //@ts-ignore protected field
    db.headers.Authorization = `Bearer ${key}`

    //@ts-ignore protected field
    db.realtime.setAuth(key)
    db.functions.setAuth(key)

    //@ts-ignore this is set as a protected field but we need to modify it to work around lack of custom token refresh
    db.auth.headers.Authorization = `Bearer ${key}`

    //@ts-ignore this is set as a protected field but we need to modify it to work around lack of custom token refresh
    db.rest.headers.Authorization = `Bearer ${key}`

    return key
}

I'm going to run through all my token refresh code again to check for any silly mistakes.

rdylina commented 1 year ago

> @rdylina Frustratingly this workaround doesn't seem to actually work. Once I've refreshed my token I still get invalid token failures. JWSError (CompactDecodeError Invalid number of parts: Expected 3 parts; got 5). As far as I can tell from reading all the code in the libraries this workaround should work so I'm definitely missing something.
>
> @laktek @hf @soedirgo @J0 Could you provide some input on this issue please? Custom tokens + refresh is the final blocker for my team to finish migrating from Firebase to Supabase.

Very curious indeed. My workaround has been working so far the last couple of days.

Obviously not a 1-to-1 comparison since we're getting our tokens from different places. Hmm.

evelant commented 1 year ago

Ok, my bad, it was a silly mistake, essentially a typo. I had authorization where I should have had Authorization. The workaround of manually issuing a new token and then setting it on the headers of the various client libraries seems to be working just fine now.

hf commented 1 year ago

> Ok, my bad, it was a silly mistake, essentially a typo. I had authorization where I should have had Authorization. The workaround of manually issuing a new token and then setting it on the headers of the various client libraries seems to be working just fine now.

That's weird; header names are typically case-insensitive. I'm glad you fixed it! 🎉

w3b6x9 commented 1 year ago

Hey folks, I wanted to address the current custom token situation with your hosted Supabase project's Realtime.

The Realtime docs regarding custom tokens here are correct: https://supabase.com/docs/guides/realtime/extensions/postgres-changes#custom-tokens.

I've also created this Replit as an example of it working: https://replit.com/@w3b6x9/RealtimeCustomToken. You can scroll to the bottom for database setup if you would like to reproduce it.

Here's how the Supabase provided anon token and your custom token are being used once passed in to supabase-js client.

Providing this here for convenient reference:

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY, {
  realtime: {
    headers: {
      apikey: 'supabase-provided-anon-token',
    },
    params: {
      apikey: 'your-custom-token',
    },
  },
})

The Supabase anon token that is provided to your Supabase project is checked in our Cloudflare API gateway to make sure that the project is valid and active. Then the gateway forwards the request, along with your custom token found under realtime.params.apikey in the options object, to our Realtime servers.

Realtime servers will check your custom token and make sure it's signed by your project's JWT secret. If you're listening to Postgres changes it will then insert this custom token's claims into your project database's realtime.subscription table.

If you decide to reproduce my Replit example, you'll see in the realtime.subscription table that the claims is that of the custom signed token.

Like @evelant mentioned earlier (https://github.com/supabase/supabase-js/issues/553#issuecomment-1492242754) I have removed the protected flag so you can call supabaseClient.realtime.setAuth(token) when you want to pass Realtime a refreshed token without ts complaining.

> Note that the description of the Supabase anon key in the docs is also confusing:
>
> "The apikey in headers must be either the anon or service_role token that Supabase signed"
>
> Other than this needing to be the apikey in params not headers, it should probably also say "that Supabase provides" rather than "that Supabase signed".

@yamen thanks, this is good feedback! will go and update the docs so it's more obvious.

kangmingtay commented 1 year ago

Hi @rdylina, just wanted to chime in on the feedback around the following:

> If you have a short-lived token, you have to go fetch or compute the new token for every single call, which can add intense levels of latency to your DB calls. In my case, I have to serve up the token from Clerk, which takes 80-130ms roundtrip, which is now stacked every single time on top of the latency of making the actual db call from supabase.

I feel like this is the tradeoff for using another auth provider here. But then again, I would like to hear from you how we can improve this. We could think about adding a method to sign in with a custom token, but verifying that the custom token is legit and dealing with the logout issue you mentioned don't seem trivial.

> There is also just the straight-up machine overhead of creating new instances of an object over and over again instead of persisting an instantiation of it and just updating the key. De-allocating that memory and re-allocating along with all the other trace logic for every single call is bad practice but not inherently an issue on very small scales. BUT on a large application, this multiplies dramatically. Imagine an application with 5,000 active users and the sheer volume of wasted CPU and memory cycles if you're doing this on the server side. Nightmare. Gives me nightmares equivalent to edge function cold starts on every call. 🤮

Just to provide some context, we used to have a method called setAuth() which allowed one to reuse a single server-side supabase client across multiple requests (each new request would call setAuth() to set the custom token in the authorization header) which is not secure. Depending on the client use, it could lead to one user sending requests as another user. So even if we added it back, we’d still be advising that users create a new client per request.

In terms of memory usage, we are using this pattern of creating new instances of an object each time a request is made for the Supabase platform. We use our own auth and client libs and we aren't running into any of these performance issues mentioned. I've also done some napkin math for the memory consumed:

The createClient() call allocates ~250kb each time it's called. Assuming there are 10,000 calls / second to createClient(), it will only take up 0.0025gb in memory. Assuming you're using the smallest AWS instance type available (t4g.nano, 0.5gb memory) and that node's GC runs at a frequency of less than a second (which is very lazy; node's GC tends to be on the more aggressive side), you would still have a lot of unused memory left.

> Connection pooling, I don't know if the supabase-js library supports the automatic pooling of requests. If it does or ever plans to in the future, this implementation would completely eliminate that possibility to pool because you have a total fresh client instantiation for every single call.

Are you referring to database connection pooling here? We don't create a new connection to the database on every call. Each service has its own fixed connection pool size and uses its own pooler. The client library just serves as a JS interface to interact with the RESTful APIs.

Hope that clarifies things and really appreciate the time and effort to provide such detailed feedback as well!

rdylina commented 1 year ago

The Supabase anon token that is provided to your Supabase project is checked in our Cloudflare API gateway to make sure that the project is valid and active. Then the gateway forwards the request, along with your custom token found under realtime.params.apikey in the options object, to our Realtime servers.

Realtime servers will check your custom token and make sure it's signed by your project's JWT secret. If you're listening to Postgres changes it will then insert this custom token's claims into your project database's realtime.subscription table.

Hello @w3b6x9 !

Unfortunately, this configuration continues NOT to work for me. Is there, by chance, some kind of conflict with the global header setting? I need to use this same object as both the REST API client and realtime. Hence I'm setting both the global bearer auth and the realtime. Is the global overwriting the realtime, perhaps?

rdylina commented 1 year ago

Hello @kangmingtay ,

Thank you for the reply. My thoughts are below.

Hi @rdylina, just wanted to chime in on the feedback around the following:

If you have a short-lived token, you have to go fetch or compute the new token for every single call, which can add intense levels of latency to your DB calls. In my case, I have to serve up the token from Clerk, which takes 80-130ms roundtrip, which is now stacked every single time on top of the latency of making the actual db call from supabase.

I feel like this is the tradeoff for using another auth provider here. But then again, i would like to hear how we can improve this from you. We could think about adding a method to sign in with a custom token, but then verifying that the custom token is legit and dealing with the logout issue you mentioned don't seem trivial.

Currently, I am solving for this by making my token longer lived (120 seconds) and then caching it and only requesting updated tokens based on the current token's expiration minus 5 seconds. Not the most elegant of solutions but it beats polling for every request.
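That caching approach can be sketched roughly as follows. This is a hypothetical helper, not part of any library: `fetchNewToken()` stands in for the round-trip to the auth provider (e.g. Clerk), and the 5-second buffer mirrors the scheme described above.

```typescript
// Hypothetical token cache: only call the auth provider when the cached
// token is within 5 seconds of its exp claim (unix seconds).
type CachedToken = { token: string; exp: number }

function needsRefresh(cached: CachedToken | null, nowMs: number = Date.now()): boolean {
  if (!cached) return true
  return nowMs / 1000 >= cached.exp - 5 // refresh 5 seconds before expiry
}

async function getToken(
  cache: { current: CachedToken | null },
  fetchNewToken: () => Promise<CachedToken> // e.g. the 80-130ms round-trip to Clerk
): Promise<string> {
  if (needsRefresh(cache.current)) {
    cache.current = await fetchNewToken()
  }
  return cache.current!.token
}
```

With this shape, most calls pay no extra latency; only roughly one call per token lifetime eats the provider round-trip.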

There is also just the straight-up machine overhead of creating new instances of an object over and over again instead of persisting one instantiation and just updating the key. De-allocating and re-allocating that memory, along with all the other trace logic, for every single call is bad practice, though not inherently an issue at very small scales. But on a large application, this multiplies dramatically. Imagine an application with 5,000 active users and the sheer volume of wasted CPU and memory cycles if you're doing this on the server side. Nightmare. Gives me nightmares equivalent to edge function cold starts on every call. 🤮

Just to provide some context, we used to have a method called setAuth() which allowed one to reuse a single server-side supabase client across multiple requests (each new request would call setAuth() to set the custom token in the authorization header), which was not secure: depending on how the client was used, it could lead to one user sending requests as another user. So even if we added it back, we'd still be advising that users create a new client per request.

In terms of memory usage, we are using this pattern of creating new instances of an object each time a request is made for the Supabase platform. We use our own auth and client libs and we aren't running into any of these performance issues mentioned. I've also done some napkin math for the memory consumed:

The createClient() call allocates ~250kb each time it's called. Assuming there are 10,000 calls/second to createClient(), it will only take up 0.0025gb in memory. Assuming you're using the smallest AWS instance type available (t4g.nano - 0.5gb memory) and that node's GC runs at a frequency of less than a second (which is very lazy - node's GC tends to be on the more aggressive side), you would still have a lot of unused memory left.

My concern regarding memory wasn't really the maximum point in time utilization but the pure machine overhead of allocating the memory and then deallocating and/or garbage collecting. I come from a background in lower-level machine languages, and, for better or worse, it makes me cringe how flippant we as a group are with higher-level languages in the way we treat compute overhead. It's just my natural instinct to minimize overhead and maximize performant design patterns. 🙈 Maybe I'm over the top on this topic.

Connection pooling: I don't know if the supabase-js library supports automatic pooling of requests. If it does, or ever plans to in the future, this implementation would completely eliminate the possibility of pooling, because you have a totally fresh client instantiation for every single call.

Are you referring to database connection pooling here? We don't create a new connection to the database on every call. Each service has its own fixed connection pool size and uses its own pooler. The client library just serves as a JS interface to interact with the RESTful APIs.

Hope that clarifies things and really appreciate the time and effort to provide such detailed feedback as well!

Thank you for the clarification. I wasn't sure if any of the REST apis support batched queries or if that was a thing that was ever on the roadmap.

I super appreciate you taking the time to write out such a detailed response.

kangmingtay commented 1 year ago

I come from a background in lower-level machine languages, and, for better or worse, it makes me cringe how flippant we as a group are with higher-level languages in the way we treat compute overhead. It's just my natural instinct to minimize overhead and maximize performant design patterns.

@rdylina welcome to the world of javascript 🙃 on a more serious note, we chose the design pattern of calling createClient on every request for the security reasons i mentioned earlier. We want to ensure that the client lib design doesn't create a security vulnerability, and that's worth protecting over trying to squeeze out a bit more optimization during memory allocation.

I wasn't sure if any of the REST apis support batched queries or if that was a thing that was ever on the roadmap.

well, this is already possible if you write your SQL statements in a function and use the supabase.rpc method (https://supabase.com/docs/reference/javascript/rpc)

w3b6x9 commented 1 year ago

The Supabase anon token that is provided to your Supabase project is checked in our Cloudflare API gateway to make sure that the project is valid and active. Then the gateway forwards the request, along with your custom token found under realtime.params.apikey in the options object, to our Realtime servers. Realtime servers will check your custom token and make sure it's signed by your project's JWT secret. If you're listening to Postgres changes it will then insert this custom token's claims into your project database's realtime.subscription table.

Hello @w3b6x9 !

Unfortunately, this configuration continues NOT to work for me. Is there, by chance, some kind of conflict with the global header setting? I need to use this same object as both the REST API client and realtime. Hence I'm setting both the global bearer auth and the realtime. Is the global overwriting the realtime, perhaps?

Hey @rdylina!

So we set it up so that Realtime's settings would overwrite global's and that includes headers: https://github.com/supabase/supabase-js/blob/fdb7bf56b313d1247409c97f7c8ed464ffca6238/src/SupabaseClient.ts#L119

What do you mean when you write that it doesn't work for you? Are you able to subscribe at all? Can you share your setup?

evelant commented 1 year ago

Annoyingly I'm still having problems with updating the client's headers when I refresh my custom token. My code posted above worked fine against the local supabase instance in docker but is failing against my live staging instance. I verified that I am getting a correctly signed updated token but unfortunately postgrest and edge function requests are failing after I attempt to update the client.

@w3b6x9 @hf @kangmingtay Could you please provide an example of how you would update a custom token on an existing client instance?

I'm really struggling with this and it's the last major piece of the puzzle necessary to transition my app to Supabase. We can't go live with the switch to Supabase until token refresh is working reliably.

kangmingtay commented 1 year ago

Hi @evelant, you can't update a custom token on an existing client instance; you will need to create a new client and set it like this:

const supabase = createClient(SUPABASE_URL, SUPABASE_KEY, {
  ...clientOptions,
  global: {
    headers: {
      Authorization: `Bearer ${CUSTOM_ACCESS_TOKEN_JWT}`,
    },
  },
});

Do note that your CUSTOM_ACCESS_TOKEN_JWT also has to be signed with the JWT_SECRET provided in your Supabase dashboard.

When your CUSTOM_ACCESS_TOKEN_JWT expires, you will need to handle refreshing it and re-creating the client above by passing it in again. We currently don't handle refreshing custom tokens on Supabase.
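Since Supabase won't refresh the custom token for you, the app needs to know when its token lapses so it can mint a new one and re-create the client. One way, as a sketch assuming a Node environment with `Buffer` (these helpers are hypothetical, not part of supabase-js), is to read the `exp` claim straight off the JWT; no verification is needed because you signed the token yourself:

```typescript
// Read the exp claim (unix seconds) from a JWT without verifying it.
function jwtExpiresAt(jwt: string): number {
  const payloadB64url = jwt.split('.')[1]
  // base64url -> base64, then decode the payload JSON
  const b64 = payloadB64url.replace(/-/g, '+').replace(/_/g, '/')
  const claims = JSON.parse(Buffer.from(b64, 'base64').toString('utf8'))
  if (typeof claims.exp !== 'number') throw new Error('token has no exp claim')
  return claims.exp
}

// Seconds left before the token expires (negative once it has).
function secondsUntilExpiry(jwt: string, nowMs: number = Date.now()): number {
  return jwtExpiresAt(jwt) - nowMs / 1000
}
```

You could then schedule a refresh at `secondsUntilExpiry(token)` minus a small buffer, fetch a fresh token, and call createClient again with the new Authorization header.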

evelant commented 1 year ago

@kangmingtay I already sign my custom token with JWT_SECRET and it works fine per my above comments.

Creating a new client on token refresh poses a problem because I have realtime subscriptions running at the time when the token refresh occurs. How does the client handle that internally on token refresh?

I'd like to avoid having to stop all listeners, create a new client, then re-create all of the listeners and handle the case where they missed messages during that transition. That would be quite a headache to manage just to work around refreshing a token.

evelant commented 1 year ago

@w3b6x9 I just double- then triple-checked to confirm, and it seems that your post above and the documentation are indeed backwards about the keys, at least during local development.

realtime: {
    headers: {
        apikey: creds.key, // supabase anon key
    },
    params: {
        apikey: `Bearer ${key}`, // my JWT signed with JWT_SECRET
    },
},

That, as recommended by the docs, does not work; it results in the following 401 Unauthorized in the Kong logs:

 "GET /realtime/v1/websocket?apikey=Bearer+MY_SIGNED_KEY_HERE&vsn=1.0.0 HTTP/1.1" 401 52 "-" "-"

However if I switch it around and put my signed custom JWT into the headers, everything works perfectly, my clients connect and get realtime updates from postgres.

realtime: {
    headers: {
        apikey: `Bearer ${key}`, // my JWT signed with JWT_SECRET
    },
    params: {
        apikey: creds.key, // supabase anon key
    },
},

Also a heads up, realtime is broken in supabase cli 1.48.1. Stick to 1.47.0 for now. https://github.com/supabase/cli/issues/997

Can anybody else confirm and reproduce this?

w3b6x9 commented 1 year ago

Creating a new client on token refresh poses a problem because I have realtime subscriptions running at the time when the token refresh occurs. How does the client handle that internally on token refresh?

I'd like to avoid having to stop all listeners, create a new client, then re-create all of the listeners and handle the case where they missed messages during that transition.

@evelant I recommend creating two Supabase clients where the first client is the one you use for everything except Realtime so you can re-create it whenever you have a refreshed token.

The second client can be used exclusively for Realtime activity. You create the client, pass in the initial custom token, and then whenever you have the refreshed token, just call secondClient.realtime.setAuth('new-token'). This will prevent having to sever Realtime's socket connection.

w3b6x9 commented 1 year ago

I just double- then triple-checked to confirm, and it seems that your post above and the documentation are indeed backwards about the keys, at least during local development.

@evelant the instructions I provided above and the ones found in the Supabase docs are confirmed for hosted Realtime. CLI local development is a bit different b/c it's still using Kong while hosted version has transitioned over to Cloudflare for the API gateway.

realtime: {
    headers: {
        apikey: creds.key, // supabase anon key
    },
    params: {
        apikey: `Bearer ${key}`, // my JWT signed with JWT_SECRET
    },
},

That, as recommended by the docs, does not work; it results in the following 401 Unauthorized in the Kong logs:

 "GET /realtime/v1/websocket?apikey=Bearer+MY_SIGNED_KEY_HERE&vsn=1.0.0 HTTP/1.1" 401 52 "-" "-"

However if I switch it around and put my signed custom JWT into the headers, everything works perfectly, my clients connect and get realtime updates from postgres.

realtime: {
    headers: {
        apikey: `Bearer ${key}`, // my JWT signed with JWT_SECRET
    },
    params: {
        apikey: creds.key, // supabase anon key
    },
},

@evelant thanks, will take a look at CLI local development and get back to everyone shortly.

Also a heads up, realtime is broken in supabase cli 1.48.1. Stick to 1.47.0 for now. supabase/cli#997

@evelant you're right, this was an issue but looks like it's now fixed in CLI v1.49.3.

w3b6x9 commented 1 year ago

@evelant I just looked at the way you set it up for local development:

realtime: {
    headers: {
        apikey: `Bearer ${key}`, // my JWT signed with JWT_SECRET
    },
    params: {
        apikey: creds.key, // supabase anon key
    },
}

And it works b/c Kong verifies params.apikey, but the problem is that Realtime is forwarded that key as well, so if you check your realtime.subscription table, the record contains the claims from the anon key and not your custom signed token's.

CLI did recently merge in a change where you can pass in the initial custom token when spinning up the CLI. See PR: https://github.com/supabase/cli/pull/947.

For example, you can pass in the custom token for role anon by doing: SUPABASE_AUTH_ANON_KEY=custom-token supabase start

Then you can pass in the refreshed token by calling client.realtime.setAuth('refreshed-token').

evelant commented 1 year ago

@w3b6x9 at least locally realtime is not forwarded the anon key in this case. realtime.subscription correctly contains the claims from my custom token. Something else is still going wrong here I think.

As for the production side, it appears that manually changing the headers on the client when I refresh my token works fine. Everything continues to work, except for realtime. After calling setAuth with my updated token, realtime no longer receives updates.

evelant commented 1 year ago

@w3b6x9 Let me clarify exactly what I'm doing and exactly the behavior I'm seeing so hopefully there isn't any confusion.

At a high level, my app does this:

  1. On app start, check for a firebase auth token
  2. If no firebase auth exists create an anonymous user
  3. Send the firebase auth token to my edge function generateToken which verifies the firebase token then signs a new token with the supabase JWT_KEY and the role authenticated
  4. The app takes the newly minted token and creates a client like so
export const setupDbConnection = async (
    localStorageInstance: { getItem: any; setItem: any; removeItem: any },
    firebaseToken: string
) => {
    const key = await getTokenFromFirebaseToken(firebaseToken)

    const authOpts = {
        storage: localStorageInstance,
        autoRefreshToken: false,
        persistSession: false,
        detectSessionInUrl: false,
    }

    const db = createClient<Database>(creds.url, creds.key, {
        auth: authOpts,
        global: {
            headers: {
                Authorization: `Bearer ${key}`,  // custom token with 'authenticated' role signed with jwt secret
            },
        },
        realtime: {
            headers: {
                apikey: `Bearer ${key}`, // custom token with 'authenticated' role signed with jwt secret
            },
            params: {
                apikey: creds.key, // supabase anon key
            },
        },
    })

    db.realtime.setAuth(key)
    return db
}
  5. Now everything works in local dev, including realtime, which has the proper 'authenticated' claim entry in realtime.subscription
  6. Periodically (every minute) and upon the app returning from the background (it's a react-native app), check for firebase token expiry. If the firebase token is expired or refreshed, refresh the custom token by calling the generateToken edge fn again
  7. Update the existing supabase client instance with the newly minted token with this code
export const refreshSupabaseTokenFromFirebaseToken = async (
    db: TaskHeroSupabaseClient | null,
    firebaseToken: string
) => {
  if (!db) {
        throw `db not initialized yet`
    }
    const key = await getTokenFromFirebaseToken(firebaseToken)

    //@ts-ignore protected field
    db.headers.Authorization = `Bearer ${key}`
    //@ts-ignore
    db.auth.headers.Authorization = `Bearer ${key}`
    //@ts-ignore
    db.rest.headers.Authorization = `Bearer ${key}`
    db.realtime.setAuth(key)

    if (db.realtime.params) {
        db.realtime.params.apikey = `Bearer ${key}`
    }
    db.functions.setAuth(key)

    return key
}

After doing this, everything still works in local development. In production deployment, everything works except realtime, which no longer receives updates. To summarize the problems:

  1. Docs are backwards, at least for local dev: realtime headers must be the custom token and params must be the supabase anon key. This setup works 100% correctly for realtime in dev.
  2. In production realtime works with the same configuration as in dev until calling setAuth with a refreshed token.

I hope that clarifies exactly what's happening for me and helps track down the problem. Please ask any questions if you need clarification.

evelant commented 1 year ago

I think I've finally got it working. My example code above had an issue; I had to remove

// don't need this, because realtime.params.apikey is always the supabase anon key
// only need to update the _headers_ with the new token on refresh
// if (db.realtime.params) {
//     db.realtime.params.apikey = `Bearer ${key}`
// }

since that param doesn't change on token refresh. It's always the supabase anon token. With this I'm finally finding that realtime stays connected and requests from the client continue to succeed without re-creating the client.

I'm still testing but hopefully I've got it this time. It would be nice if supabase-js could get a built-in updateToken(newToken: string) method to support this pattern (or some other method of more cleanly supporting custom tokens).

kangmingtay commented 1 year ago

hey @evelant, apologies for the late reply as the team was quite busy with launch week! just to clarify, i'm assuming that you'd want updateToken(newToken: string) to take care of setting the new custom token properly across the Supabase services, like so:

db.headers.Authorization = `Bearer ${key}`
db.auth.headers.Authorization = `Bearer ${key}`
db.rest.headers.Authorization = `Bearer ${key}`
db.realtime.setAuth(key)
db.functions.setAuth(key)
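As a sketch, those five lines can be wrapped into a single helper. The field names below mirror the snippets in this thread (they are supabase-js internals, hence the @ts-ignore lines earlier) and may drift in future versions, so treat this as an illustration against v2 at the time of writing, not a stable API:

```typescript
// Minimal shape of the pieces updateToken() has to touch. These mirror
// the snippets in this thread and may change in future supabase-js versions.
interface TokenTargets {
  headers: Record<string, string>
  auth: { headers: Record<string, string> }
  rest: { headers: Record<string, string> }
  realtime: { setAuth(token: string): void }
  functions: { setAuth(token: string): void }
}

// Push a freshly minted custom JWT into every sub-client in one place.
function updateToken(db: TokenTargets, key: string): void {
  const bearer = `Bearer ${key}`
  db.headers.Authorization = bearer
  db.auth.headers.Authorization = bearer
  db.rest.headers.Authorization = bearer
  db.realtime.setAuth(key)
  db.functions.setAuth(key)
}
```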

solsonl commented 1 year ago

@kangmingtay Based on your comment, I got realtime working with a custom JWT. Here is how I handle it:

export const getSupabaseRealtime = (exchangeToken: string) => {
  const initialOptions = {
    realtime: {
      headers: {
        apikey: `Bearer ${exchangeToken}`,
      },
      params: {
        apikey: ConfigService.getPropertyAnonKey(),
      },
    },
  };
  const realtimeClient = createClient(
    ConfigService.getPropertySupabaseUrl(),
    ConfigService.getPropertyAnonKey(),
    initialOptions
  );
  realtimeClient.realtime.setAuth(exchangeToken);
  return realtimeClient;
};

hmnd commented 11 months ago

I think I've followed all of what has worked for @evelant (set realtime.headers.apikey to Bearer ${myjwt} + call db.realtime.setAuth(myjwt)), but I'm still unable to subscribe using an authenticated role with my custom JWT.

I'm getting this strange error response back from the realtime websocket: {:error, :error_generating_signer}

My JWT looks like this:

{
  "role": "authenticated",
  "app_metadata": {
    // some metadata here
  },
  "aud": "authenticated",
  "iat": 1685701508,
  "exp": 1685705108
}

And here's the first message sent to the websocket:

{"topic":"realtime:xxx","event":"phx_join","payload":{"config":{"broadcast":{"ack":false,"self":false},"presence":{"key":""},"postgres_changes":[{"event":"*","schema":"public","table":"mytable"}]},"access_token":"myjwt"},"ref":"1","join_ref":"1"}

My users don't correlate to users in Supabase, so I don't include a sub, but I've tried setting it to a valid user's uuid too.

Anyone got an idea what's going on here? Would appreciate any help! I'm at a loss.

evelant commented 11 months ago

@hmnd Sorry but I'm not sure what's going on there. FYI sub does not need to correspond to a row in auth.users. I set sub in my token to my firebase user id.

I would double check:

  1. You've enabled RLS for the tables you want to listen to
  2. You've got an RLS policy that matches the data you want in your token , ex:
    CREATE POLICY "Enable full access to rows owned by authenticated user" ON "public"."my_table"
        AS PERMISSIVE FOR ALL
        TO authenticated
        USING (auth.jwt() ->> 'sub' = "user_id");
  3. You've enabled replication, the supabase_realtime publication exists, and the tables you want are added to it
  4. You're totally sure you're signing your custom token with your supabase JWT secret
hmnd commented 11 months ago

@evelant wow thanks for the fast reply! That's good to know about sub.

I believe everything is configured correctly, but I'll double check. Maybe I'll try with a regular supabase jwt too to try to narrow down what that error could mean...

hmnd commented 11 months ago

@evelant spent half a day yesterday trying to debug this and just found that I was missing the typ: JWT header :facepalm:. Now it works like a charm!
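For anyone who hits the same `{:error, :error_generating_signer}` response: the JWT header must carry `"typ": "JWT"`. A minimal HS256 signer that makes the header explicit is sketched below. This is illustration only; in production prefer a maintained library such as jsonwebtoken or jose, which set `typ` for you.

```typescript
import { createHmac } from 'node:crypto'

// Encode to base64url (no padding), as the JWT spec requires.
function base64url(input: string | Buffer): string {
  return Buffer.from(input)
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '')
}

// Sign a payload with the project's JWT secret using HS256.
// The "typ": "JWT" header is the piece Realtime insists on.
function signSupabaseJwt(payload: object, jwtSecret: string): string {
  const header = { alg: 'HS256', typ: 'JWT' }
  const signingInput = `${base64url(JSON.stringify(header))}.${base64url(JSON.stringify(payload))}`
  const signature = base64url(createHmac('sha256', jwtSecret).update(signingInput).digest())
  return `${signingInput}.${signature}`
}
```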

LivioGama commented 11 months ago

I decided to write a whole medium post for avoiding you waste time in the future: https://liviogamassia.medium.com/using-supabase-rls-with-firebase-auth-custom-auth-provider-357eaad9c70f

chasers commented 10 months ago

Hey all ... thanks for fumbling through the black box here. Especially @LivioGama for the full write up. I needed to figure this out too.

I can confirm, at least for Realtime, the only thing you need to do is call supabase.realtime.setAuth('your-custom-jwt') after you instantiate your Supabase client with your anon jwt and before you subscribe to the Channel. It sounds like you need some headers for other services but for Realtime you don't.

Couple updates:

I'm also trying to get something official up in our docs wrt using your own JWTs across the whole platform. Not exactly sure why we don't have that yet.

hmnd commented 8 months ago

@chasers just curious if you ever got around to documenting this more fully or if that's been put on the roadmap? I'm sure it would be very beneficial for future users trying to stumble through this.

chasers commented 8 months ago

I did! See: https://supabase.com/docs/guides/realtime/postgres-changes#custom-tokens

And you can use the Inspector with a custom JWT here: https://realtime.supabase.com/inspector/new

Gonna close this one now!