GMOD / jbrowse-components

Source code for JBrowse 2, a modern React-based genome browser
https://jbrowse.org/jb2
Apache License 2.0

Improve scalability of synteny datasets #2788

Closed cmdcolin closed 6 months ago

cmdcolin commented 2 years ago

Currently, synteny datasets are loaded into memory entirely. The Dot tool (https://dot.sandbox.bio/?coords=https://storage.googleapis.com/sandbox.bio/dot/gorilla_to_GRCh38.coords&index=https://storage.googleapis.com/sandbox.bio/dot/gorilla_to_GRCh38.coords.idx&annotations=https://storage.googleapis.com/sandbox.bio/dot/gencode.v27.genes.bed) has a method to filter out unique and repetitive elements via a pre-processing script. We could consider loading this data format. Other tools like D-GENIES also filter out small alignments by default.
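As an illustration of the D-GENIES-style "filter small alignments" idea, here is a minimal sketch (not JBrowse code; the function name and the 5 kb threshold are assumptions) that drops PAF records whose alignment block length, column 11 of the PAF format, falls below a cutoff:

```python
def filter_small_alignments(paf_lines, min_block_len=5000):
    """Keep only PAF records whose alignment block length
    (PAF column 11) meets a minimum threshold."""
    kept = []
    for line in paf_lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 12:
            continue  # skip malformed records
        block_len = int(fields[10])  # 0-indexed col 10 = PAF column 11
        if block_len >= min_block_len:
            kept.append(line)
    return kept
```

A filter like this shrinks the in-memory feature count dramatically for repetitive whole-genome alignments, at the cost of hiding short alignments until the user zooms in.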

cmdcolin commented 2 years ago

Possible motivating example: gorilla vs hg38

https://hgdownload.cse.ucsc.edu/goldenpath/hg38/vsGorGor3/

The chain file is 482MB compressed and unzips to almost 2GB, which is pretty much too large for interactive use.

The Dot tool has "Load unique" and "Load repetitive" buttons, I think to avoid loading the entire thing at once, which could be one strategy (it requires preprocessing the .delta file from MUMmer with their Python script).

cmdcolin commented 1 year ago

porting some comments from #3444

Scalability brainstorming

The hs1 vs mm39 liftover.chain.gz file that we use for the SyntenyTrack is 69MB of gzipped data, which is 219MB ungzipped. The maximum size of ungzipped data we support is 512MB, since that is the maximum size of strings in Chrome, so this comes pretty close to our limits. I can certainly imagine species (plant genomes, etc.) that would exceed them.

An indexed file format could help us in some cases. We have not focused on indexed file formats thus far, because we were using somewhat small PAF files that could be loaded into memory, but scalability concerns are raised in https://github.com/GMOD/jbrowse-components/issues/2788. With indexing, we may not need to download the entire file when accessing a local region on the LGV synteny track (currently, synteny track adapters generally download the entire file, though this adapter behavior could be adjusted).

The bigChain format from UCSC could possibly help as an example of an indexed file format, but it is only indexed in "one dimension", e.g. for the query genome and not the target genome, so accessing the data from the target genome's side would be unindexed. A custom tabix-style chain format could probably also be made, similar to mafviewer. "2D" indexed formats would be cool, but may not be available. Also, "biologically", it may be better to have two tracks, "hs1 (query) vs mm39 (target)" and "mm39 (query) vs hs1 (target)", in which case 1D indexing is fine.
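To sketch what 1D indexing on one genome buys: sort records by position once, then a region query can binary-search to a start offset instead of scanning the whole file. This is an in-memory analogue of a tabix-style linear index, not JBrowse code; all names are illustrative, and `max_record_len` stands in for the bound on record length that a real linear index would track:

```python
import bisect


def build_target_index(records):
    """records: (target_chrom, target_start, target_end, payload) tuples.
    Group by chromosome and sort by start so that region queries can
    binary-search rather than scan every record."""
    index = {}
    for chrom, start, end, payload in records:
        index.setdefault(chrom, []).append((start, end, payload))
    for chrom in index:
        index[chrom].sort()
    return index


def query(index, chrom, start, end, max_record_len=1_000_000):
    """Return payloads overlapping [start, end). max_record_len bounds how
    far left of `start` a record may begin and still overlap the region,
    which is the assumption a linear index like tabix relies on."""
    out = []
    recs = index.get(chrom, [])
    starts = [r[0] for r in recs]
    lo = bisect.bisect_left(starts, start - max_record_len)
    for s, e, payload in recs[lo:]:
        if s >= end:
            break  # sorted by start, so nothing further can overlap
        if e > start:
            out.append(payload)
    return out
```

The same structure indexed twice (once per genome) corresponds to the "two tracks" idea above: each track answers fast queries in one coordinate system.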

cmdcolin commented 1 year ago

Here is an example of a human vs mouse dataset. It currently stretches the limits of our scalability, and crashes some browsers: http://jbrowse.org/code/jb2/main/?config=test_data%2Fhuman_vs_mouse.json&session=share-cJ_p38pUpD&password=z4P70

It has a 68MB gzipped liftover file as its data source, which uncompresses to 219MB.

cmdcolin commented 1 year ago

Ideas: an indexed database, bidirectional tabix indexes, multi-scale resolution, and a CLI helper (or maybe an in-browser helper)
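One reading of "multi-scale resolution": precompute coarser levels so that, when zoomed far out, only alignments long enough to span at least a pixel are fetched. A hypothetical sketch (names and level values are assumptions, not an existing JBrowse API) of bucketing alignments by a zoom-dependent length cutoff:

```python
def min_length_for_zoom(bp_per_pixel, min_pixels=1):
    """At a given zoom level (genome bases per screen pixel), an alignment
    shorter than this many bp would render smaller than `min_pixels`
    pixels, so a multi-scale store could omit it from that level."""
    return int(bp_per_pixel * min_pixels)


def assign_levels(alignment_lengths, levels=(1, 100, 10_000)):
    """Bucket alignment lengths into resolution levels: an alignment
    belongs to every level whose cutoff it still meets, so coarse
    (zoomed-out) levels contain only the longest alignments."""
    return {
        bpp: [l for l in alignment_lengths if l >= min_length_for_zoom(bpp)]
        for bpp in levels
    }
```

A CLI or in-browser helper along these lines would trade preprocessing time and storage for much smaller per-request payloads at low zoom.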

cmdcolin commented 1 year ago

Server-side helper: a server could automatically build bidirectional indexes and multi-scale resolutions, and could potentially extend more readily to graph-type genomes.

cmdcolin commented 6 months ago

Somewhat solved by PIF. Can revisit for further use cases such as large zoom-outs.