Closed: bombillazo closed this issue 4 months ago
@bombillazo I generally agree that allowing storing of arbitrary metadata could be a nice feature to have in Storage. Will add this to the roadmap!
Strongly agree on adding custom metadata per storage object. Having to maintain a mirror table just for additional data is a pain, especially given the lack of usable triggers to keep both sides in sync.
+2 for this. I would definitely like this; I'd like to tag files as they are uploaded.
+1
+1 for this
Also +1
Would make multi-tenant systems a lot easier to manage, e.g. tenant quotas.
+1
+1
+1000000000
I'm currently having to add metadata into my file names, so would love to clean up this mess!
+1 for this. I'm currently working with AI-generated images and need to store the prediction parameters that generated each image. Otherwise I'll have to use AWS S3, since I know it has a metadata feature.
+1
+1 looking forward to this
+1
+1
+1 looking for same
This would improve my workflow with Supabase as well. Right now, I need to keep a separate table and keep it in sync every time I upload, move, or delete objects.
+1
I switched from self-hosting to Supabase, closed down my own S3 bucket to use Supabase Storage, and now I can't store metadata on the objects... Please please please add this functionality!
+1 seems very helpful!!
+1
+999, this would really help in making Supabase Storage my go-to for all apps. Currently I have to use alternatives despite using Supabase DB and Auth.
Great request.
@fenos On this topic, since we're not sure when we might get this: is it wise to add a custom trigger to the storage schema so that when a DELETE event happens on the objects table, we can automatically delete it in our metadata table? These are the types of things having a dedicated file_metadata field would prevent, so we wouldn't have to sync data and resources across tables.
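For reference, this is the kind of two-step cleanup the mirror-table approach forces onto application code today; a minimal sketch assuming supabase-js and a hypothetical file_metadata table keyed by object path (the bucket, table, and column names are illustrative, not part of any Supabase API):

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Without colocated metadata, every delete has to touch two places.
async function deleteObjectAndMetadata(path: string) {
  // 1. Remove the object from Storage.
  const { error: storageError } = await supabase.storage
    .from("photos") // hypothetical bucket
    .remove([path]);
  if (storageError) throw storageError;

  // 2. Remove the matching row from the hypothetical mirror table; a trigger on
  //    storage.objects (or a native file_metadata column) would make this step unnecessary.
  const { error: dbError } = await supabase
    .from("file_metadata") // hypothetical table
    .delete()
    .eq("object_path", path);
  if (dbError) throw dbError;
}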
+1
+1, it could prevent needing an extra table for metadata.
@fenos Thank you for your work on this project. Could you please provide an example implementation for this feature? It would be really helpful for understanding how to integrate it properly.
Hi @Aadilhassan thanks for your kind words. I still need to implement this feature in storage-js (the client) once that's done I'll create the official docs.
Will let you know when these are done. Very excited to have this feature!
Thanks for the update @fenos. I'm currently building my photography portfolio. I'm assuming I'd be able to store photo JSON data alongside each image uploaded to storage, correct?
If so, I'm a customer now!
Hey @fenos, I'm excited about this feature!
I saw the changes and don't see any way to pass the new user_metadata field to the POST/PUT Storage API endpoints.
@tusamni Yes, correct!
@bombillazo The client-side changes are in the works.
@fenos was just looking for this and couldn't be happier to see this is currently being worked on.
Thank you! Patiently waiting.
Hey there. So... any update or documentation on this? I saw a merged PR but couldn't find a way to get it working. Should I go ahead and create a table, with everything that implies, to deal with extra data for media?
Thanks.
Yeah... any news on it would be nice. Perhaps we should open a new issue for this since this one is closed; it's also kind of weird to close an issue while there is no way to use this feature yet... @fenos
@felipenmoura @logemann https://github.com/supabase/storage-js/pull/207 needs to be merged before you can use it with the client.
Awesome, sounds great
Thanks.
+1 Waiting for this.
+1 waiting for this
++ excited for this!
+1 waiting for this feature
+1 waiting for this
I am confused. I followed the merge chain and this feature seems to be fully merged and released. I updated my @supabase/supabase-js package so that it includes @supabase/storage-js:2.7.0. However, the FileOptions type does not include a metadata field:
export interface FileOptions {
/**
* The number of seconds the asset is cached in the browser and in the Supabase CDN. This is set in the `Cache-Control: max-age=<seconds>` header. Defaults to 3600 seconds.
*/
cacheControl?: string
/**
* the `Content-Type` header value. Should be specified if using a `fileBody` that is neither `Blob` nor `File` nor `FormData`, otherwise will default to `text/plain;charset=UTF-8`.
*/
contentType?: string
/**
* When upsert is set to true, the file is overwritten if it exists. When set to false, an error is thrown if the object already exists. Defaults to false.
*/
upsert?: boolean
/**
* The duplex option is a string parameter that enables or disables duplex streaming, allowing for both reading and writing data in the same stream. It can be passed as an option to the fetch() method.
*/
duplex?: string
}
Similarly, neither .info() nor .exists() exists on supabase.storage.from(bucket). Could you update the documentation or point me in the right direction, @fenos?
@tillka You should update storage-js to @supabase/storage-js:2.7.1.
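For anyone else hitting this, here is roughly what the client-side flow looks like once the updated client is installed; a minimal sketch assuming @supabase/storage-js 2.7.1+, where FileOptions accepts a metadata object and info() returns the stored custom metadata (exact option and response field names may differ, so check the package's types):

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

async function uploadWithMetadata(file: File) {
  // Upload the file and attach user-defined JSON metadata to the object.
  const { data, error } = await supabase.storage
    .from("photos") // hypothetical bucket
    .upload(`uploads/${file.name}`, file, {
      contentType: file.type,
      upsert: false,
      metadata: { originalFileName: file.name, tag: "portfolio" },
    });
  if (error) throw error;

  // Read it back; the custom metadata should be included in the object info.
  const { data: info, error: infoError } = await supabase.storage
    .from("photos")
    .info(data.path);
  if (infoError) throw infoError;
  console.log(info);
}

Note that, as discussed further down the thread, list() does not return this metadata, so reading it back means calling info() per object or querying storage.objects directly.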
Do I get this right that metadata does not yet work with supabase.storage? I have to use the dedicated StorageClient?
I would like to get my stored metadata through supabase.storage.list() rather than iterating over it and calling storageClient.info() for each file.
It would be really nice if supabase.storage.list() would return user_metadata along with the other file properties! We use the custom metadata field to store the original file names. In the meantime, the solution I'm using is to send a request to the backend where I query the storage.objects table directly:
const { data, error } = await supabase
.schema("storage")
.from("objects")
.select("*")
.eq("bucket_id", bucketId)
.eq("owner_id", userId)
.ilike("user_metadata->>originalFileName", `%${search}%`);
How are you doing this? I keep getting an error:
{
code: 'PGRST106',
details: null,
hint: null,
message: 'The schema must be one of the following: pg_pgrst_no_exposed_schemas'
}
You need to expose the storage schema from Settings > API > Exposed Schemas
Feature Request
Currently, the objects table in the storage schema is a light abstraction over the S3 storage. There is a metadata column in the objects table, but it corresponds to the metadata of the S3 file and cannot be used by the user. The API does not expose any way to add related metadata to objects directly. This metadata would ideally be in JSON format to contain user-determined data. Currently, the user needs to get around this limitation by either:
- Maintaining a separate table that references the storage.objects table. This dramatically complicates querying, migrations, and client usage for something that could be colocated within the same table.
- Modifying the storage schema to add this new functionality, potentially breaking compatibility with Supabase-controlled services and implementation.
The Storage API should natively support metadata, and the API / JS clients should be updated to enable this powerful and common feature for file storage management. My proposal is to add a new column called file_metadata to differentiate it from the current metadata field.