Ok so this is actually not a bug, but a limitation of the archive with its current config. Let me explain: when uploading a file, we process it through a pipeline that splits the file and then erasure-codes it over multiple shards. All of this generates metadata so we can later reconstruct the file from that metadata.
What happens here is that we reach a metadata size bigger than what 0-db can accept. 0-db has a limit of around 8 MiB of data per write call. So here, once the whole file is uploaded, we try to write the metadata to the tlog, but 0-db refuses because the metadata block is too big. Since the write to the tlog fails, minio signals the write of the file as failed too.
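To make the relation concrete, here is a minimal back-of-envelope sketch (not the actual minio code) of how the total metadata compares against the 0-db write limit. The ~1 KiB per-block metadata figure is an assumption for illustration only; the real per-block overhead depends on your shard count and the metadata encoding.

```go
// Sketch: estimate the metadata generated for a file and check it against
// the ~8 MiB per-write limit of 0-db. Illustrative numbers, not minio code.
package main

import "fmt"

const (
	zdbMaxWrite       = 8 << 20 // ~8 MiB limit per 0-db write call
	perBlockMetaBytes = 1 << 10 // assumed ~1 KiB of metadata per block (illustrative)
)

// metadataSize estimates the metadata produced for a file of fileSize bytes
// split into blocks of blockSize bytes.
func metadataSize(fileSize, blockSize int64) int64 {
	blocks := (fileSize + blockSize - 1) / blockSize // number of blocks, rounded up
	return blocks * perBlockMetaBytes
}

func main() {
	fileSize := int64(16 << 30) // 16 GiB file
	blockSize := int64(1 << 20) // 1 MiB BlockSize
	meta := metadataSize(fileSize, blockSize)
	fmt.Printf("estimated metadata: %d MiB (0-db write limit: %d MiB)\n", meta>>20, zdbMaxWrite>>20)
	if meta > zdbMaxWrite {
		fmt.Println("metadata block too big: the tlog write would be refused")
	}
}
```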
An easy way to solve this is to change the configuration of minio itself. If you need to store bigger files, you can for example set the BlockSize in the minio configuration to a higher value. With a bigger BlockSize, minio generates less metadata and thus you can store bigger files.
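To give a rough idea of the effect (illustrative numbers, using the same assumed ~1 KiB of metadata per block as in the sketch above): a 16 GiB file with a 1 MiB BlockSize produces ~16,384 blocks and thus ~16 MiB of metadata, which is over the 8 MiB limit; with a 4 MiB BlockSize the same file produces ~4,096 blocks and only ~4 MiB of metadata, which fits.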
I've created a small sheet where you can play with the different configuration values to see what fits your needs.
https://docs.google.com/spreadsheets/d/1M8lTpN00yFul4NH2el3gJ0bN-JJC4-o5HT721GWUYV4/edit?usp=sharing
On this sheet, you can edit the blue cells and it will compute the size of the metadata generated for the config and file size you specify. If the result is highlighted in orange, it means your config doesn't support that file size and you need to tweak the config a bit.
see: https://docs.grid.tf/threefold/proj_bancadati/issues/90