mastodon-sc / mastodon-git

A plugin for Mastodon that allows collaborating on a project via Git.

Git Compatible Mastodon Storage Format #21

Open maarzt opened 1 year ago

maarzt commented 1 year ago

This is a part of: mastodon-sc/mastodon-git#12

Currently, Mastodon provides two storage formats: as a folder or as a *.mastodon file. Both formats can cause problems when used inside a Git repository:

  1. When text files are used: a Git merge of two changes to the same dataset might introduce inconsistencies in the text files. Semantically conflicting changes, such as removing a spot while adding an edge to that spot, might not show up as textual merge conflicts.
  2. When binary files are used: committing many different versions of a large binary file might blow up the repository size, and size limitations in Git or GitHub might defeat the attempt to version Mastodon files.

Possible solutions:

Binary files divided into blocks

Introduce a key: Mastodon rewrites the spot ids when saving a project. Non-constant spot ids are problematic, because a small change in the ModelGraph can easily change a large number of spot ids. This defeats efficient storage (delta compression) of multiple versions with Git. It is therefore necessary to have a key value that normally doesn't change.

The data in a Mastodon project could be expressed in two tables:

spot table:
------------
1. spot-key (unique and constant)
2. timepoint
3. x
4. y
5. z
6. label
7. covariance matrix
?? is there more

link table
-----------
1. link-key (unique and constant)
2. source spot-key
3. target spot-key
4. outgoing edge index
5. incoming edge index

Additionally, tables for properties should be created. These cover tag sets and features.

property tables:
------------
1. spot-key or link-key (spot vs link can currently be derived from the filename)
2. value (e.g. feature value, tag value as boolean)
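
Sketched as Java records (field names are illustrative only, they are not taken from the Mastodon code base), the three tables could look roughly like this:

```java
// Illustrative sketch of the proposed table rows; only the columns listed
// above, field names are made up for this example.
record SpotRow(
		long spotKey,        // unique and constant
		int timepoint,
		double x, double y, double z,
		String label,
		double[] covariance  // entries of the symmetric 3x3 covariance matrix
) {}

record LinkRow(
		long linkKey,        // unique and constant
		long sourceSpotKey,
		long targetSpotKey,
		int outgoingEdgeIndex,
		int incomingEdgeIndex
) {}

record PropertyRow(
		long key,            // spot-key or link-key, disambiguated by file name
		double value         // feature value, or tag value encoded as 0/1
) {}
```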

These tables could be stored in a simple chunked binary format (the easiest solution uses DataOutputStream, which would be Java-specific) or as ZARR tables (which could potentially also be read from Python scripts). A table should be sorted by its key and chunked: for example, keys 0-999 are written into the first file, keys 1000-1999 into the second file, and so on.
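
A minimal sketch of the DataOutputStream variant, assuming a chunk size of 1000 keys per file and the illustrative SpotRow record from above (file naming and chunking scheme are assumptions, not a decided format):

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch: writes spot rows, already sorted by spot-key, into chunk files of
// 1000 keys each (keys 0-999 -> spots-0.bin, 1000-1999 -> spots-1.bin, ...).
class ChunkedSpotTableWriter
{
	private static final int CHUNK_SIZE = 1000;

	static void write( Path folder, List< SpotRow > sortedSpots ) throws IOException
	{
		int i = 0;
		while ( i < sortedSpots.size() )
		{
			long chunkIndex = sortedSpots.get( i ).spotKey() / CHUNK_SIZE;
			Path file = folder.resolve( "spots-" + chunkIndex + ".bin" );
			try ( DataOutputStream out = new DataOutputStream(
					new BufferedOutputStream( Files.newOutputStream( file ) ) ) )
			{
				// write all rows whose key falls into this chunk
				while ( i < sortedSpots.size()
						&& sortedSpots.get( i ).spotKey() / CHUNK_SIZE == chunkIndex )
				{
					SpotRow s = sortedSpots.get( i++ );
					out.writeLong( s.spotKey() );
					out.writeInt( s.timepoint() );
					out.writeDouble( s.x() );
					out.writeDouble( s.y() );
					out.writeDouble( s.z() );
					out.writeUTF( s.label() );
					for ( double c : s.covariance() )
						out.writeDouble( c );
				}
			}
		}
	}
}
```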

Using UUID as key value

A possible alternative to UUIDs: use a simple counter (integer or long). Each new spot receives an id from this counter. When a spot is deleted, its id is never assigned again.

If spots are deleted, the resulting "holes" in the id range are not filled. Deleted spots subsequently need to be removed from the link tables, tag tables and feature tables.
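
A sketch of such a counter, assuming the key is a long and the next unused value is persisted with the project so that keys of deleted spots are never handed out again:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a never-reusing key counter. The next unused key would need to
// be stored as part of the project so it survives save/load; the AtomicLong
// only keeps key allocation thread safe.
class SpotKeyAllocator
{
	private final AtomicLong nextKey;

	SpotKeyAllocator( long firstUnusedKey )
	{
		nextKey = new AtomicLong( firstUnusedKey );
	}

	/** Returns a fresh key. Keys of deleted spots are never handed out again. */
	long newKey()
	{
		return nextKey.getAndIncrement();
	}

	/** Value to persist with the project, so keys stay unique across sessions. */
	long firstUnusedKey()
	{
		return nextKey.get();
	}
}
```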

Text format using UUID (not preferred)

Each spot and link gets a UUID. The spot table and link table are stored in text files with rows:

spot table
--------
spot_UUID,timepoint,x,y,z,label,covariance matrix

link table
--------
link_UUID, source_spot_UUID, target_spot_UUID, outgoing_edge_index, incoming_edge_index

The text format has the advantages:

and disadvantages:

It's probably still necessary to divide the text file into chunks in order to save memory when using git.

TODOs:

stefanhahmann commented 10 months ago

Usage of the ZARR file format has been checked by @maarzt. There are too few existing Java libraries for ZARR to use it for the purpose of this issue.

stefanhahmann commented 10 months ago

A simple counter seems to be the better solution for the implementation, for the reasons mentioned above.

maarzt commented 4 months ago

Compare file reading performance: UUID vs. int32

The results were rather counterintuitive. Reading a 128-bit UUID from a file plus a lookup in a HashMap<UUID, Object> is almost as performant as reading a 32-bit int plus a lookup in an ArrayList<Object>. So the result of my experiments is: don't hesitate to use UUIDs, the read performance can be on par with 32-bit ints. But it is very important to use a BufferedInputStream and BufferedOutputStream.
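
For reference, a rough sketch of how such a comparison can be set up (my reconstruction, not the exact benchmark code): write N keys to temporary files, then read them back through buffered streams and resolve each key to an object.

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Rough reconstruction of the comparison: read keys back from a file through
// buffered streams and resolve them to objects. Not the original benchmark.
public class KeyReadBenchmark
{
	private static final int N = 1_000_000;

	public static void main( String[] args ) throws IOException
	{
		// prepare data: N random UUIDs and N sequential int ids
		List< UUID > uuids = new ArrayList<>();
		Map< UUID, Object > byUuid = new HashMap<>();
		List< Object > byIndex = new ArrayList<>();
		for ( int i = 0; i < N; i++ )
		{
			UUID u = UUID.randomUUID();
			uuids.add( u );
			Object spot = new Object();
			byUuid.put( u, spot );
			byIndex.add( spot );
		}

		// write both key files
		Path uuidFile = Files.createTempFile( "keys-uuid", ".bin" );
		Path intFile = Files.createTempFile( "keys-int", ".bin" );
		try ( DataOutputStream out = new DataOutputStream(
				new BufferedOutputStream( Files.newOutputStream( uuidFile ) ) ) )
		{
			for ( UUID u : uuids )
			{
				out.writeLong( u.getMostSignificantBits() );
				out.writeLong( u.getLeastSignificantBits() );
			}
		}
		try ( DataOutputStream out = new DataOutputStream(
				new BufferedOutputStream( Files.newOutputStream( intFile ) ) ) )
		{
			for ( int i = 0; i < N; i++ )
				out.writeInt( i );
		}

		// read back: 128-bit UUID + HashMap lookup
		long t0 = System.nanoTime();
		try ( DataInputStream in = new DataInputStream(
				new BufferedInputStream( Files.newInputStream( uuidFile ) ) ) )
		{
			for ( int i = 0; i < N; i++ )
			{
				UUID u = new UUID( in.readLong(), in.readLong() );
				byUuid.get( u );
			}
		}
		long t1 = System.nanoTime();

		// read back: 32-bit int + ArrayList lookup
		try ( DataInputStream in = new DataInputStream(
				new BufferedInputStream( Files.newInputStream( intFile ) ) ) )
		{
			for ( int i = 0; i < N; i++ )
				byIndex.get( in.readInt() );
		}
		long t2 = System.nanoTime();

		System.out.println( "UUID + HashMap:  " + ( t1 - t0 ) / 1e6 + " ms" );
		System.out.println( "int + ArrayList: " + ( t2 - t1 ) / 1e6 + " ms" );
	}
}
```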