Closed — zucatti closed this issue 1 year ago
Hello @zucatti are you using the S3 backend?
Hello, no, just the file storage with a Longhorn volume (ReadWriteOnce) on a Kubernetes infra. Every upload works great, and move or copy operations are OK, but reading the newly created/updated file leads to this error… The file is updated on disk and in the database... Can't figure out what happened ;-)
Replying to my own thread ;-) fixed the error with this (declaring the variables outside the `try` so they stay in scope):

```ts
let cacheControl = ''
let contentType = ''
try {
  cacheControl = await this.getMetadata(file, 'user.supabase.cache-control')
  contentType = await this.getMetadata(file, 'user.supabase.content-type')
} catch (e) {
  // The xattrs are missing on copied/moved files; fall back to empty values
}
```
EDIT:
A copy or move operation does not carry over the xattrs `user.supabase.content-type` and `user.supabase.cache-control`.
The result is that the copied/moved file lacks those attributes, producing the "The extended attribute does not exist" error when you try to download/read it.
The solution is to read the metadata from the original file and `setMetadata` on the destination file, like this:
```ts
async copyObject(bucket: string, source: string, destination: string): Promise<{ httpStatusCode: number }> {
  const srcFile = path.resolve(this.filePath, `${bucket}/${source}`)
  const destFile = path.resolve(this.filePath, `${bucket}/${destination}`)

  // Read the xattr metadata from the source before copying
  const cacheControl = await this.getMetadata(srcFile, 'user.supabase.cache-control')
  const contentType = await this.getMetadata(srcFile, 'user.supabase.content-type')

  await fs.copyFile(srcFile, destFile)

  // Re-apply the metadata to the destination file
  await Promise.all([
    this.setMetadata(destFile, 'user.supabase.content-type', contentType),
    this.setMetadata(destFile, 'user.supabase.cache-control', cacheControl),
  ])

  return {
    httpStatusCode: 200,
  }
}
```
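To confirm whether a given file on disk actually carries these attributes, you can inspect them from the shell. A sketch, assuming Linux with `getfattr` from the `attr` package installed; the path is hypothetical:

```shell
# List the Supabase xattrs on a stored object (Linux).
# The path below is a placeholder; substitute your storage location.
getfattr -d -m 'user.supabase' /var/lib/storage/mybucket/myfile

# macOS equivalent:
# xattr -l /var/lib/storage/mybucket/myfile
```

If the copied file prints no `user.supabase.*` entries, that matches the error described above.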
Hi @zucatti, preserving the metadata on copy and move for the file system backend seems like a bug we can fix. Reopening.
This has now been fixed with the refactoring work.
@fenos Hi, I am using self-hosted Supabase on a Docker/Linux setup. I am trying to upload images to supabase-storage but I am getting a similar error.
Can you help me with this?
Hi everyone! During my various tests, migrations, and (self-hosted) Supabase upgrades I ran into this problem: the files placed on the filesystem lost their extended attributes.
Thankfully for us, the Supabase team thought ahead when designing this solution, storing the content of the extended attributes also in the objects table (of the storage schema) in the Postgres database.
The two missing attributes are `user.supabase.cache-control` and `user.supabase.content-type`.
Since the file hierarchy is organized as `<storage root>/<bucket_id>/<object folder>/<version>`, the solution is pretty simple: run a script that queries the database by matching the folder name containing each file (`storage.objects.name`), the bucket (`storage.objects.bucket_id`) and, to be extra careful, the final file name, which is the version (`storage.objects.version`).
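Before reaching for a script, the two attributes can also be restored by hand for a single object. A sketch on Linux (`setfattr` comes from the `attr` package; macOS users can use `xattr -w` instead); the path and values below are hypothetical, the real values live in `storage.objects.metadata`:

```shell
# Hypothetical object path and metadata values; adapt to your own objects.
FILE='volumes/storage/stub/stub/mybucket/avatar.png/some-version-uuid'
setfattr -n user.supabase.content-type -v 'image/png' "$FILE"
setfattr -n user.supabase.cache-control -v 'max-age=3600' "$FILE"
```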
The following script may contain bugs; I am not responsible for any damage or problems it may cause. Feel free to take it and use it as you like.
Before running this script, verify the following points:

- Your system has the `xattr` command (otherwise install it first)
- Each bucket's `bucket_id` == `bucket_name`
- Run the script from the folder containing the `volumes` one
- Create a `.env` file in the same folder that includes the `SUPABASE_PUBLIC_URL` and the `SERVICE_ROLE_KEY` variables
- Install the dependencies: `npm install --save-dev typescript ts-node dotenv @supabase/supabase-js`
```ts
import 'dotenv/config'
import { SupabaseClient, createClient } from '@supabase/supabase-js'
import fs from 'fs/promises'
import path from 'path'
import { exec } from 'child_process'
import util from 'util'

const execAsync = util.promisify(exec)

// Quote a value for safe interpolation into a POSIX shell command
const shellQuote = (value: string): string => `'${value.replace(/'/g, `'\\''`)}'`

async function setFileMetadata(filePath: string, key: string, value: string): Promise<void> {
  try {
    // Convert the key to a format suitable for xattr
    const formattedKey = `user.${key}`
    // Write the metadata using xattr
    await execAsync(`xattr -w ${formattedKey} ${shellQuote(value)} ${shellQuote(filePath)}`)
  } catch (error) {
    console.error(`Error setting metadata: ${error}`)
  }
}

let processed = 0
let skipped = 0

async function searchFiles(
  directory: string,
  callback: (fileName: string, fullPath: string) => Promise<void>,
): Promise<void> {
  async function traverse(currentDirectory: string): Promise<void> {
    const files = await fs.readdir(currentDirectory)
    for (const file of files) {
      const filePath = path.join(currentDirectory, file)
      const stat = await fs.stat(filePath)
      if (stat.isDirectory()) {
        // If it's a directory, recurse into it
        await traverse(filePath)
      } else {
        // If it's a file, execute the callback
        await callback(file, filePath)
      }
    }
  }
  // Start the traversal
  await traverse(directory)
}

async function searchCallback(
  client: SupabaseClient,
  file: string,
  fullPath: string
): Promise<void> {
  // Skip the macOS `.DS_Store`
  if (file === '.DS_Store') {
    return
  }

  const paths = fullPath.split('/')
  const containingFolder = paths[paths.length - 2]
  const bucket = paths[4]

  const storageObjectData = await getFileObjectDataFromDb(client, file, containingFolder, bucket)
  if (storageObjectData == undefined) {
    console.debug('skipped')
    skipped++
    return
  }
  if (storageObjectData.version !== file) {
    console.error('ERROR')
    process.exit(1)
  }

  // console.debug('processed: ' + fullPath)
  await setFileMetadata(fullPath, 'supabase.cache-control', storageObjectData.metadata.cacheControl)
  await setFileMetadata(fullPath, 'supabase.content-type', storageObjectData.metadata.mimetype)
  processed++
}

/** Get the file Object data from the database (storage.objects) */
const getFileObjectDataFromDb = async (
  client: SupabaseClient,
  fileName: string,
  folderName: string,
  bucketName: string
): Promise<{ version: string; metadata: { eTag: string; size: number; mimetype: string; cacheControl: string; lastModified: string; contentLength: number; httpStatusCode: number } } | undefined> => {
  const result = await client
    .schema('storage')
    .from('objects')
    .select('*')
    .eq('bucket_id', bucketName)
    .eq('name', folderName)
    .single()

  if (result.error) {
    console.debug('file name: ' + fileName + ' - folder name: ' + folderName + ' - bucket name: ' + bucketName)
    console.error(result.error)
    return undefined
  }
  return result.data
}

const mainLoop = async () => {
  console.log('start')
  // Initialize the Supabase client using the configuration from the `.env` file
  const client = createClient(process.env.SUPABASE_PUBLIC_URL || '', process.env.SERVICE_ROLE_KEY || '')
  await searchFiles(
    'volumes/storage/stub/stub',
    async (file: string, fullPath: string) => {
      return searchCallback(client, file, fullPath)
    }
  )
  console.log('\n\n')
  console.log('processed: ' + processed)
  console.log('skipped: ' + skipped)
  console.log('end')
}

mainLoop()
```
Hope this will help someone 😄🖖🏻!
P.S. At the end the script will report the results.
P.P.S. Sorry for my bad English.
I'm having the same issue after copying the `volumes/storage` folder recursively to another instance. The assets are copied, but the Supabase UI doesn't show their previews (500 server error) and I get "The extended attribute does not exist" in the container error logs. Is there any "more correct" way to copy all storage files to another Supabase instance?
@riccardolardi hi!
Maybe by copying the source files you didn't take care of preserving their extended attributes.
Here's a guide on how to copy files (using `cp` or `rsync`) while keeping the original xattrs: https://unix.stackexchange.com/a/119980.
Alternatively, you can pack everything with `tar`: https://stackoverflow.com/a/44753270.
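As a sketch of what those answers describe (GNU tools assumed; the paths and host below are placeholders):

```shell
# cp: --preserve=xattr keeps extended attributes (GNU coreutils)
cp -r --preserve=mode,timestamps,xattr volumes/storage /backup/storage

# rsync: -a for archive mode, -X to preserve xattrs
rsync -aX volumes/storage/ other-host:/path/to/volumes/storage/

# tar: --xattrs stores and restores xattrs (GNU tar)
tar --xattrs -cpf storage.tar volumes/storage
tar --xattrs -xpf storage.tar -C /restore/target
```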
Hope this will help you 😄!
Thank you @massimodeluisa
For others running into the same issue: I had to run `sudo rsync -Xavz --fake-super src target`, which preserves xattrs. On Debian Buster I had to compile the latest rsync, since the apt-distributed version does not support the `--fake-super` flag.
Bug report
Describe the bug
Can't access my file after a move action
To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
Expected behavior
Read the file
System information