skyzh opened 2 months ago
This is only a problem with metadata keys, and only if the metadata gets too large (i.e., >= 4GB), in which case basebackup will consume a lot of memory anyway. Therefore, fixing this is not a priority right now.
This turned out to also be a good idea for image layer compression, because we don't know the size of each layer in advance (cc @arpad-m)
Currently, the partition algorithm ensures that a single create_image_layers job is not too large, and each job creates only a single image layer, regardless of its size. This does not matter for relational data, where the partitions are already size-bounded, but it needs to be fixed for metadata keys, whose partitions have no size bound.
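A minimal sketch of one possible fix, deciding the split at write time rather than at partition time. All names here (`ImageLayer`, `split_into_image_layers`, the `u64` key type) are hypothetical stand-ins, not the pageserver's real `Key`/layer-writer types: a new image layer is started once the bytes written so far reach a target, so no single layer grows unboundedly even when the total size of the partition is unknown up front.

```rust
/// Hypothetical stand-in for an image layer under construction.
struct ImageLayer {
    entries: Vec<(u64, Vec<u8>)>, // (key, serialized value)
}

/// Split a sorted stream of (key, value) pairs into multiple image layers,
/// rolling over to a fresh layer once the accumulated bytes reach the
/// target, instead of emitting exactly one layer per job regardless of size.
fn split_into_image_layers(
    pairs: impl Iterator<Item = (u64, Vec<u8>)>,
    target_size: u64,
) -> Vec<ImageLayer> {
    let mut layers = Vec::new();
    let mut current = ImageLayer { entries: Vec::new() };
    let mut current_size: u64 = 0;

    for (key, value) in pairs {
        // Start a new layer once the current one has reached the target;
        // never emit an empty layer.
        if current_size >= target_size && !current.entries.is_empty() {
            layers.push(std::mem::replace(
                &mut current,
                ImageLayer { entries: Vec::new() },
            ));
            current_size = 0;
        }
        current_size += value.len() as u64;
        current.entries.push((key, value));
    }
    if !current.entries.is_empty() {
        layers.push(current);
    }
    layers
}

fn main() {
    // 10 values of 40 KiB each with a 128 KiB target -> 3 layers.
    let pairs = (0u64..10).map(|k| (k, vec![0u8; 40 * 1024]));
    let layers = split_into_image_layers(pairs, 128 * 1024);
    println!("created {} image layers", layers.len());
    assert!(layers.iter().all(|l| !l.entries.is_empty()));
}
```

Checking size only at write time is what makes this work for the sparse metadata keyspace, where no size estimate exists before the values are materialized, and it is also what makes it compatible with compression (per the comment above): the decision depends only on bytes actually produced, not on a size predicted in advance.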