supabase / storage

S3 compatible object storage service that stores metadata in Postgres
https://supabase.com/docs/guides/storage
Apache License 2.0
807 stars 115 forks

Signed URLs for upload #81

Closed: etiennedupont closed this 1 year ago

etiennedupont commented 2 years ago

Feature request

Is your feature request related to a problem? Please describe.

At Labelflow, we developed a tool to upload images to our Supabase storage through a single Next.js API route. The goal is to abstract the storage backend away from the client side by querying a generic upload route for any file, and to simplify permission management. In the server-side function, a service-role Supabase client performs the actual upload. We use next-auth to secure the route (and to manage authentication in the app in general).

Client-side upload looks like this:

await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
                  method: "PUT",
                  body: file,
                });

The server-side API route looks more or less like this (permission management omitted):

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process?.env?.SUPABASE_API_URL as string,
  process?.env?.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.put(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  // `req.file` is populated by a multipart body parser (e.g. multer) attached
  // to this route; that middleware setup is omitted here.
  const { file } = req;
  const { error } = await client.storage.from(bucket).upload(key, file.buffer, {
    contentType: file.mimetype,
    upsert: false,
    cacheControl: "public, max-age=31536000, immutable",
  });
  if (error) return res.status(404).end();
  return res.status(200).end();
});

export default apiRoute;

The problem is that we face a serious limitation on upload size: we deploy on Vercel, which doesn't allow serverless functions to handle request bodies larger than 5 MB. Since we send the image bytes in the upload request from the client to the server, we are likely to hit that limit quite often.

Describe the solution you'd like

As we don't want to manipulate Supabase clients on the client side, we think the ideal solution would be to let us upload directly to Supabase using a signed upload URL. The upload route above would then take only a key as input and return a signed URL to upload to.

Client-side upload would now be in two steps:

// Get a Supabase signed URL
const { signedURL } = await (
  await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
    method: "GET",
  })
).json();

// Upload the file
await fetch(signedURL, {
  method: "PUT",
  body: file,
});

And our API route would look more or less like this:

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process?.env?.SUPABASE_API_URL as string,
  process?.env?.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.get(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  const { signedURL } = await client.storage
    .from(bucket)
    .createUploadSignedUrl(key, 3600); // <= this is the missing feature

  if (signedURL) {
    res.setHeader("Content-Type", "application/json");
    return res.status(200).json({ signedURL });
  }

  return res.status(404).end();
});

export default apiRoute;
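
For reference, the createUploadSignedUrl call above is the requested (hypothetical) API; supabase-js v2 ships this feature under the name createSignedUploadUrl (the last comment in this thread refers to it). Below is a minimal sketch of the same route written against that API. It is not from the original post: the bucket name and env variable names are reused from the snippets above as placeholders, and the return shape reflects my understanding of the v2 client.

// Sketch only (not the author's code): the same route against supabase-js v2,
// which exposes the feature as createSignedUploadUrl.
import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process.env.SUPABASE_API_URL as string,
  process.env.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images"; // placeholder, as in the post above

apiRoute.get(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  // In v2 this resolves to { data: { signedUrl, token, path }, error }.
  const { data, error } = await client.storage
    .from(bucket)
    .createSignedUploadUrl(key);

  if (error || !data) return res.status(404).end();

  res.setHeader("Content-Type", "application/json");
  // The client needs both values: the URL to PUT to and the token expected
  // by uploadToSignedUrl (a client-side sketch appears at the end of this thread).
  return res.status(200).json({ signedUrl: data.signedUrl, token: data.token });
});

export default apiRoute;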

Describe alternatives you've considered

I described them in our related issue:

Additional context

We're happy to work on developing this feature at Labelflow if you think this is the best option!

riderx commented 2 years ago

I have the same issue for https://capgo.app. I let users upload from my CLI with an API key, so they are not logged in from the CLI. My current solution is to split the file into 1 MB chunks, upload them in a loop, and edit the file in storage, but it often fails for big files: https://github.com/Cap-go/capgo-cli/issues/12
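
For illustration only, here is a rough sketch of that chunk-and-loop workaround. It assumes each 1 MB chunk is stored as its own part-N object under the key, which may well differ from what capgo-cli actually does; the env variable names are placeholders.

// Illustrative sketch, not the capgo-cli implementation: split a file into
// 1 MB chunks and upload each chunk as a separate object.
import { createClient } from "@supabase/supabase-js";
import { promises as fs } from "fs";

const supabase = createClient(
  process.env.SUPABASE_URL as string,
  process.env.SUPABASE_KEY as string // placeholder env var names
);
const CHUNK_SIZE = 1024 * 1024; // 1 MB

async function uploadInChunks(bucket: string, key: string, filePath: string) {
  const buffer = await fs.readFile(filePath);
  for (let i = 0; i * CHUNK_SIZE < buffer.length; i++) {
    const chunk = buffer.subarray(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    // Each chunk is its own object; a consumer has to reassemble them in order.
    const { error } = await supabase.storage
      .from(bucket)
      .upload(`${key}/part-${i}`, chunk, { upsert: true });
    if (error) throw error; // a chunk failed; big files fail more often, as noted above
  }
}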

fenos commented 2 years ago

Hello! Apologies for the late reply,

I really like the idea of a signed URL for upload. I will add this to the backlog for discovery & prioritization.

riderx commented 2 years ago

@fenos thanks for that. As for me, I don't need the feature anymore.

I was able to do the API key check with RLS.

If you want to do it too:

First, create key_mode, the API key type:

CREATE TYPE "public"."key_mode" AS ENUM (
    'read',
    'write',
    'all',
    'upload'
);

Then create the table:

CREATE TABLE "public"."apikeys" (
    "id" bigint NOT NULL,
    "created_at" timestamp with time zone DEFAULT "now"(),
    "user_id" "uuid" NOT NULL,
    "key" character varying NOT NULL,
    "mode" "public"."key_mode" NOT NULL,
    "updated_at" timestamp with time zone DEFAULT "now"()
);

Then create the Postgres function:

CREATE OR REPLACE FUNCTION public.is_allowed_apikey(apikey text, keymode key_mode[])
 RETURNS boolean
 LANGUAGE plpgsql
 SECURITY DEFINER
AS $function$
BEGIN
  RETURN (SELECT EXISTS (
    SELECT 1
    FROM apikeys
    WHERE key = apikey
    AND mode = ANY(keymode)
  ));
END;
$function$;

Then add the RLS policy to the tables you want to give access to:

is_allowed_apikey(((current_setting('request.headers'::text, true))::json ->> 'apikey'::text), '{all,write}'::key_mode[])

And in SDK v1 you can add your API key like this:

const supabase = createClient(hostSupa, supaAnon, {
    headers: {
        apikey: apikey,
    }
})

In SDK v2:

const supabase = createClient(hostSupa, supaAnon, {
    global: {
        headers: {
            apikey: apikey,
        },
    },
})
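
A small usage illustration (mine, not part of the comment above): with the apikey header set on the supabase client created above, ordinary calls against a table protected by that policy are authorized by the header. The table name here is hypothetical.

// Illustration only: `some_protected_table` stands for any table that has the
// is_allowed_apikey(...) RLS policy from above attached to it.
const { data, error } = await supabase
  .from("some_protected_table")
  .insert({ name: "example" });
// If the apikey header does not match a row in `apikeys` with an allowed mode,
// RLS rejects the insert and `error` is set.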

kfields commented 2 years ago

That would be very much appreciated. Thank you.

n-glaz commented 2 years ago

+1 for this, signed upload URLs would solve a lot of my own implementation issues around using Supabase storage with NextJS

th-m commented 2 years ago

➕ 💯 This would be great

chitalian commented 2 years ago

+1 would really like this

riderx commented 2 years ago

I updated my comment for people who want the same API key system as me.

413n commented 1 year ago

+1

c3z commented 1 year ago

+1

huntedman commented 1 year ago

+1

yoont4 commented 1 year ago

Is this still prioritized? The DB is set up in a way where we can still use middleware to handle auth, but that is not the case for storage uploads. If we aren't able to create a signed URL, we have to use RLS to control upload authorization, which doesn't work in all of our cases. This would be extremely useful in letting some access control live in middleware for file uploads.

ccssmnn commented 1 year ago

I'm also interested in this feature. I would love to create presigned URLs for uploads to save bandwidth and avoid file size limitations, while using our own server for most of the business logic. It looks like @etiennedupont has fixed their issue by using S3 directly, unfortunately.

c3z commented 1 year ago

I can share my solution: I deployed a proxy server on fly.io to circumvent the issue. However, it's not ideal, and I'm still waiting for this feature too.

Eerkz commented 1 year ago

> (quoting @riderx's API key + RLS setup from the comment above)

Is anyone else having trouble with the custom headers? I tried logging the request headers and my custom headers are never attached.

softmarshmallow commented 5 months ago

Why does createSignedUploadUrl not have an upsert option, while uploadToSignedUrl does have one?

How would I be able to create a signed URL for client upload that updates existing files?
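
For context, here is a sketch of how the two methods named above fit together on the client. This reflects my reading of the v2 API, with placeholder values throughout; whether the upsert flag on uploadToSignedUrl actually covers the overwrite case is exactly the open question here.

// Sketch only: client-side half of the signed-upload flow. `path` and `token`
// come from a server-side createSignedUploadUrl call.
import { createClient } from "@supabase/supabase-js";

const SUPABASE_URL = "https://<project-ref>.supabase.co"; // placeholder
const SUPABASE_ANON_KEY = "<anon-key>";                   // placeholder
const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);

async function uploadWithSignedUrl(bucket: string, path: string, token: string, file: File) {
  // uploadToSignedUrl accepts file options, including the upsert flag
  // mentioned in the question above.
  const { data, error } = await supabase.storage
    .from(bucket)
    .uploadToSignedUrl(path, token, file, { upsert: true });
  if (error) throw error;
  return data;
}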