Open sbesson opened 1 month ago
Nice write up, @sbesson. I don't have any immediate ideas. I'd be interested in how much faster a dirty change to `saveAndReturnIds`
is (though it's likely a breaking change), as well as how close the size of the graph is to your RAM. After that, it'd be on to profiling AFAIK.
The recent work of https://github.com/ome/omero-blitz/issues/73 revealed another scalability issue when importing large filesets into OMERO.server.
Synthetic datasets with varying numbers of files can easily be created using a combination of Bio-Formats test images and pattern files:
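The exact commands are not shown above; the following is a minimal sketch, assuming Bio-Formats' zero-byte `.fake` test images plus a `.pattern` companion file to group them into a single fileset (the directory name, file names and count are illustrative):

```shell
#!/bin/sh
# Create a synthetic fileset of N Bio-Formats ".fake" test images plus a
# ".pattern" file that makes Bio-Formats treat them as one fileset.
N=100                               # vary: 10, 100, 1000, 10000, 50000
DIR="fileset_${N}"
mkdir -p "$DIR"
i=0
while [ "$i" -lt "$N" ]; do
    # A .fake file is interpreted from its name alone, so an empty file
    # is sufficient.
    touch "$DIR/test_t$(printf '%05d' "$i").fake"
    i=$((i + 1))
done
# The pattern file references the whole numeric range of .fake files.
printf 'test_t<00000-%05d>.fake\n' "$((N - 1))" > "$DIR/test.pattern"
```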
Each of these datasets can then be imported using the OMERO command-line interface. In this case, the import was done using in-place transfer and skipping the min/max calculation:
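The invocation itself is elided above; a hedged sketch using the CLI's `--transfer=ln_s` (in-place symlink transfer) and `--skip=minmax` options, assuming an active OMERO session and a hypothetical pattern file at `fileset_100/test.pattern`:

```shell
#!/bin/sh
# Import a synthetic fileset in place (symlinks, no copy) and skip the
# min/max pixel value calculation. Requires a running OMERO.server and an
# active CLI session (omero login).
IMPORT_CMD="omero import --transfer=ln_s --skip=minmax fileset_100/test.pattern"
if command -v omero >/dev/null 2>&1; then
    $IMPORT_CMD
else
    echo "omero CLI not available; would run: $IMPORT_CMD"
fi
```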
The import time for a given fileset can then be queried using `omero fs importtime Fileset:<id>`. The output of this command produces a breakdown by phase (upload, metadata, ...). A more detailed analysis can be obtained from the import logs stored under the managed repository.

The import command above was executed for synthetic filesets of growing sizes (10, 100, 1000, 10000 and 50000 files) using OMERO.server 5.6.11 and an initially empty database. The following table reports the import metrics for the upload phase as well as a breakdown by a few sub-steps within this phase:
In principle, the number of transfer operations and of objects to create should scale linearly with the number of files in the fileset. Thus we would reasonably expect the execution time to scale linearly with the number of files as well.
The last column of the table shows some non-linear behavior and corresponds to the issue described in https://github.com/ome/omero-blitz/issues/73. This should hopefully be addressed in OMERO.server 5.6.12 and help with one of the bottlenecks associated with importing large filesets.
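To gather the per-fileset timings reported above, the query can be repeated over the imported filesets; a small sketch with hypothetical Fileset IDs (substitute the IDs reported by the import command):

```shell
#!/bin/sh
# Print the per-phase import time breakdown for each imported fileset.
# The IDs below are placeholders for illustration only.
FILESET_IDS="1 2 3 4 5"
for id in $FILESET_IDS; do
    if command -v omero >/dev/null 2>&1; then
        omero fs importtime "Fileset:${id}"
    else
        echo "omero CLI not available; would run: omero fs importtime Fileset:${id}"
    fi
done
```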
Unlike the creation of `OriginalFile` objects in `RepositoryDaoImpl.createOriginalFile`, whose execution time scales linearly with the number of files in the fileset, the time spent creating the `Fileset` in `RepositoryDaoImpl.saveFileset` increases non-linearly. For a typical fileset of 50K files, ~50% of the total upload time is spent in this operation. More precisely, 2068s (95% of the `saveFileset` time) happen during the single operation saving the objects to the database in https://github.com/ome/omero-blitz/blob/4c46e156bebff40c8618bacfc16bd2b605d33cfd/src/main/java/ome/services/blitz/repo/RepositoryDaoImpl.java#L555-L560

/cc @chris-allan @joshmoore @jburel @pwalczysko