Closed etiennedupont closed 1 year ago
I have the same issue for https://capgo.app. I allow users to upload from my CLI with an API key, so they are not logged in in the CLI. My current workaround is to split the file into 1 MB chunks, upload them in a loop, and edit the file in storage, but it often fails for big files: https://github.com/Cap-go/capgo-cli/issues/12
Hello! Apologies for the late reply.
I really like the idea of a signed URL for upload; I will add this to the backlog for discovery and prioritization.
@fenos thanks for that. For my part, I no longer need the feature, since I was able to implement an API-key check with RLS.
First, create key_mode, the enum type for API keys:
CREATE TYPE "public"."key_mode" AS ENUM (
'read',
'write',
'all',
'upload'
);
Then create the table:
CREATE TABLE "public"."apikeys" (
"id" bigint NOT NULL,
"created_at" timestamp with time zone DEFAULT "now"(),
"user_id" "uuid" NOT NULL,
"key" character varying NOT NULL,
"mode" "public"."key_mode" NOT NULL,
"updated_at" timestamp with time zone DEFAULT "now"()
);
Then create the Postgres function:
CREATE OR REPLACE FUNCTION public.is_allowed_apikey(apikey text, keymode key_mode[])
RETURNS boolean
LANGUAGE plpgsql
SECURITY DEFINER
AS $function$
BEGIN
  RETURN (SELECT EXISTS (
    SELECT 1
    FROM apikeys
    WHERE key = apikey
      AND mode = ANY(keymode)));
END;
$function$;
Then add this RLS check to any table you want to grant access to:
is_allowed_apikey(((current_setting('request.headers'::text, true))::json ->> 'apikey'::text), '{all,write}'::key_mode[])
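The check above goes inside a full policy. A minimal sketch, assuming you are protecting storage uploads: the policy name, the INSERT command, and the bucket name 'uploads' are illustrative and should be adapted to your schema:

```sql
-- Hypothetical policy: allow inserts into storage.objects only when the
-- request carries an apikey header with 'all' or 'write' mode.
CREATE POLICY "Allow uploads with a valid API key"
ON storage.objects FOR INSERT TO anon
WITH CHECK (
  bucket_id = 'uploads'
  AND is_allowed_apikey(
    ((current_setting('request.headers'::text, true))::json ->> 'apikey'::text),
    '{all,write}'::key_mode[]
  )
);
```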
And in SDK v1 you can send your API key like this:
const supabase = createClient(hostSupa, supaAnon, {
headers: {
apikey: apikey,
}
})
In SDK v2
const supabase = createClient(hostSupa, supaAnon, {
global: {
headers: {
apikey: apikey,
}
}
})
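With either client, uploads then go through the normal storage API and the apikey header is checked by the RLS policy. A minimal sketch; the bucket name, path, and file variable are placeholders:

```javascript
// `supabase` is the client created above with the apikey header attached.
const { data, error } = await supabase.storage
  .from('uploads')
  .upload('builds/app-1.0.0.zip', fileBuffer, {
    contentType: 'application/zip',
  })
if (error) throw error
```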
That would be very much appreciated. Thank you.
+1 for this, signed upload URLs would solve a lot of my own implementation issues around using Supabase storage with NextJS
➕ 💯 This would be great
+1 would really like this
I updated my comment above for people who want the same API-key setup as me.
+1
+1
+1
Is this still prioritized? The DB is set up in a way where we can still use middleware to handle the auth, but that is not the case for storage uploading. If we aren't able to create a signed URL, we have to use RLS to control upload authorization, which doesn't work in all of our cases. This would be extremely useful in letting some access control live in middleware for file uploads.
I'm also interested in this feature. I would love to create presigned URLs for uploads to save bandwidth and avoid file size limitations, while using our own server for most of the business logic. It looks like @etiennedupont has fixed their issue by using S3 directly, unfortunately.
I can share my solution: I deployed a proxy server on fly.io to circumvent the issue. However, it's not ideal, and I'm also still waiting for this feature.
Anyone else having trouble with the custom headers? Tried logging the request headers and my custom headers are never attached.
Why does createSignedUploadUrl not have an upsert option, while uploadToSignedUrl does?
How would I create a signed URL for a client upload that updates an existing file?
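For reference, the two-step flow looks roughly like this in supabase-js v2. The upsert option on uploadToSignedUrl is taken from the question above; whether createSignedUploadUrl itself accepts one depends on your library version, so verify against its docs. Bucket, path, and client names are placeholders:

```javascript
// Server side (service-role client): mint a signed upload URL for a path.
const { data, error } = await admin.storage
  .from('avatars')
  .createSignedUploadUrl('user-1/avatar.png')
if (error) throw error

// Client side: upload with the returned token.
// `upsert: true` asks storage to overwrite an existing object.
const { error: uploadError } = await supabase.storage
  .from('avatars')
  .uploadToSignedUrl(data.path, data.token, file, { upsert: true })
```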
Feature request
Is your feature request related to a problem? Please describe.
At Labelflow, we developed a tool to upload images to our Supabase storage, based on a single Next.js API route. The goal is to abstract the storage method away from the client side by querying a generic upload route to upload any file, and to ease permission management. In the server-side function, a service-role Supabase client makes the actual upload. We use next-auth to secure the route (and to manage authentication in the app in general).
Client-side upload looks like this:
The server-side API route looks more or less like this (permission management omitted):
The problem is that we face a serious upload-size limitation: we deploy on Vercel, which doesn't allow serverless functions to handle requests larger than 5 MB. Since the upload request from the client to the server carries the image bytes, we are likely to hit that limit quite often.
Describe the solution you'd like
As we don't want to manipulate Supabase clients on the client side, we think the ideal solution would be to upload directly to Supabase using a signed upload URL. The upload route above would then take only a key as input and return a signed URL to upload to.
Client-side upload would now be in two steps:
And our API route would look more or less like this:
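A hypothetical sketch of the proposed route, using the createSignedUploadUrl API that supabase-js later gained; the bucket name, env var names, and handler shape are illustrative assumptions, not the author's actual code:

```javascript
import { createClient } from '@supabase/supabase-js'

// Service-role client; lives only on the server, never exposed to the browser.
const admin = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY
)

export default async function handler(req, res) {
  // ...next-auth permission checks would go here...
  const { key } = req.body
  const { data, error } = await admin.storage
    .from('images')
    .createSignedUploadUrl(key)
  if (error) return res.status(500).json({ error: error.message })
  // data contains { signedUrl, token, path }: the client uploads directly to it
  res.status(200).json(data)
}
```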
Describe alternatives you've considered
I described them in our related issue:
Additional context
We're happy to work on developing this feature at Labelflow if you think this is the best option!