Open darthShadow opened 1 year ago
Following up on https://github.com/superfly/litefs/issues/105, would it be possible to eventually add some kind of data deduplication if a lot of the tables are the same across multiple DBs with only user-specific tables being different?
@darthShadow LiteFS does physical replication which means that it deals with page-level data. It doesn't really have a way to check the row-level data and deduplicate.
Ideally, I wouldn't need multiple DBs at all in such a case, but write throughput becomes a problem with one user adversely affecting another.
Separate databases should be mostly independent in terms of write throughput. The biggest write-perf issue with LiteFS is that it runs through FUSE, which will have higher latency than a regular file system; however, it should handle parallel databases well since the FUSE calls are not serialized.
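As a quick illustration of that independence (my sketch, not from the thread): SQLite only serializes writers per database file, so writes to one user's database never block writes to another's. The mount path `/litefs`, the per-user file names, and the schema below are assumptions for the example.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"sync"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(user int) {
			defer wg.Done()
			// Each user gets its own database file under the FUSE mount
			// (path is an assumption for illustration).
			db, err := sql.Open("sqlite3", fmt.Sprintf("/litefs/user-%d.db", user))
			if err != nil {
				log.Fatal(err)
			}
			defer db.Close()

			if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
				log.Fatal(err)
			}
			// These writes contend only with other writers of this user's
			// database, not with the other goroutines' databases.
			for j := 0; j < 100; j++ {
				if _, err := db.Exec(`INSERT INTO events (body) VALUES (?)`, fmt.Sprintf("event %d", j)); err != nil {
					log.Fatal(err)
				}
			}
		}(i)
	}
	wg.Wait()
}
```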
parallel databases well since the FUSE calls are not serialized
Yeah, that's working fine. I was just heading off the question about combining the databases by giving my reason for keeping them separate 😅
it deals with page-level data
The multiple databases have a few tables that are completely identical. My understanding is no, but would the pages be identical in such a case, and would LiteFS be capable of deduping those (if such a feature existed)?
The leaf pages of the b-tree may be identical but it would require that the rows are inserted/updated/deleted in the exact same order every time. The interior (branch) pages point to page numbers so those would all be different.
Another issue I just thought of is that the LTX files that store transaction data are independent of each other so linking data across multiple files in different databases would add a lot of complexity.
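To make the constraint concrete, here is a rough sketch (mine, not LiteFS code) for estimating how much page-level overlap two databases actually have. It assumes both files use a 4096-byte page size and hashes raw pages; only byte-identical pages could ever be dedup candidates, which is why insert order and the page-number pointers in interior pages matter so much. The file names are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

// pageSize is assumed here; the real value is stored big-endian at
// offset 16 of the SQLite file header.
const pageSize = 4096

func pageHashes(path string) (map[[32]byte]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	hashes := make(map[[32]byte]bool)
	buf := make([]byte, pageSize)
	for {
		_, err := io.ReadFull(f, buf)
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		} else if err != nil {
			return nil, err
		}
		hashes[sha256.Sum256(buf)] = true
	}
	return hashes, nil
}

func main() {
	a, err := pageHashes("user-a.db")
	if err != nil {
		log.Fatal(err)
	}
	b, err := pageHashes("user-b.db")
	if err != nil {
		log.Fatal(err)
	}

	shared := 0
	for h := range a {
		if b[h] {
			shared++
		}
	}
	fmt.Printf("%d of %d distinct pages in user-a.db are byte-identical to a page in user-b.db\n", shared, len(a))
}
```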
The leaf pages of the b-tree may be identical but it would require that the rows are inserted/updated/deleted in the exact same order every time.
Yep, that's the same for the common tables from the initial creation of the user DB.
Every few months or so, I extract the user-specific data and apply it to a base DB. The base DB is generally 20+ GB, with the user-specific data only adding a few GB in extreme cases (most are in the 100s of MBs). Considering this, having multiple copies of the 20+ GB base seems wasteful if I can keep only the user-specific data separate.
To explain in terms of a VCS: the individual DBs are feature branches taken from the master and frequently rebased onto it, so deduping the master across the various DBs would give a significant size reduction.
Out-of-order merges of the CUD events are sometimes possible using previous and new context.
You have the old row and the new row, and you can possibly use that for a diff and patch merge algorithm.
I've seen this done with out-of-order, offline-first designs.
Just figured it’s worth throwing it out there.
If FKs are involved, though, it's a bridge too far … not sure if they exist at the WAL page level?
You have the old row and the new row, and you can possibly use that for a diff and patch merge algorithm.
You can make a diff from a page; however, there are a lot of edge cases to worry about. Sometimes pages split in two, sometimes they merge together, etc. You can't simply take the old and new version of the page and diff them. You actually need to take the new page, determine its table membership (if a ptrmap exists), and then traverse to the pages before and after the updated page to determine the set of logical changes. It's really quite a pain in the ass. :)
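Even the first step of that, figuring out what kind of page you are looking at, already requires parsing the file. A small sketch (mine, based on the documented SQLite file format, not LiteFS code): leaf table pages (type 13) hold row data, while interior table pages (type 5) hold child page numbers, which is why the interior pages differ between otherwise-identical databases. The file name is hypothetical.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("user-a.db")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Page size is stored big-endian at offset 16 of the 100-byte file header
	// (the special value 1, meaning 64 KiB, is ignored in this sketch).
	header := make([]byte, 100)
	if _, err := f.ReadAt(header, 0); err != nil {
		log.Fatal(err)
	}
	pageSize := int64(binary.BigEndian.Uint16(header[16:18]))

	counts := map[byte]int{}
	page := make([]byte, pageSize)
	for pgno := int64(1); ; pgno++ {
		if _, err := f.ReadAt(page, (pgno-1)*pageSize); err != nil {
			break // end of file
		}
		// The b-tree page header starts at offset 100 on page 1, offset 0 elsewhere.
		off := 0
		if pgno == 1 {
			off = 100
		}
		counts[page[off]]++
	}

	// 2/10 = interior/leaf index b-tree, 5/13 = interior/leaf table b-tree.
	// Other values are freelist, overflow, or ptrmap pages, which a real
	// logical diff would also have to account for.
	fmt.Printf("page type counts: %v\n", counts)
}
```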
If FKs are involved, though, it's a bridge too far … not sure if they exist at the WAL page level?
IIRC FKs are simply validated at the time of the transaction and don't have auxiliary data with them.
Right. It’s all abstracted. Definitely not a simple contextual diff/patch, it seems. I reckon you’re the only human close enough to it to fathom it.