GMOD / Apollo

Genome annotation editor with a Java Server backend and a Javascript client that runs in a web browser as a JBrowse plugin.
http://genomearchitect.readthedocs.io/

evaluate projection plugin #755

Closed nathandunn closed 8 years ago

nathandunn commented 8 years ago

Evaluate use of a "projection" plugin. This would go in the client / apollo directory.

The possible advantages to this are:

1) could be used by other JBrowse applications, maybe
2) may allow better integrations on the client
3) might allow for a performance improvement, but I think that the savings may be minimal

I think that 2) might be more of the driving force here.

nathandunn commented 8 years ago

For moving everything up to the top, Falcor + webpack were also possibilities.

It's sort of similar to *ouchDB: https://netflix.github.io/falcor/starter/what-is-falcor.html

The more difficult part of this is that things like upstream sequence alterations / isoforms may cause race conditions and/or transaction-level issues (atomic reads + writes with consistent threads).
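As a rough illustration of the ordering problem (a minimal sketch with hypothetical names, not Apollo's actual data model): an insertion upstream of a feature shifts all of its downstream coordinates, so two clients reading and writing without atomic ordering can end up with inconsistent projections.

```javascript
// Sketch (hypothetical names): why upstream alterations make read/write
// ordering matter. An insertion upstream shifts every downstream coordinate,
// so a projection computed before a concurrent edit lands is stale.
function projectCoordinate(pos, alterations) {
  let offset = 0;
  for (const alt of alterations) {
    if (alt.position < pos) {
      offset += alt.lengthChange; // +n for an insertion, -n for a deletion
    }
  }
  return pos + offset;
}

const alterations = [];
const exonStart = 5000;

// Client A reads a projection, then client B inserts 10 bp upstream.
const staleView = projectCoordinate(exonStart, alterations); // 5000
alterations.push({ position: 1200, lengthChange: 10 });      // client B's edit
const freshView = projectCoordinate(exonStart, alterations); // 5010

console.log(staleView !== freshView); // true: A's cached view is now wrong
```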

nathandunn commented 8 years ago

webpack is similar to Grails assets. Also, pileup.js does some of the stuff we were thinking of:

http://www.hammerlab.org/2015/06/19/introducing-pileup-js-a-browser-based-genome-viewer/

Like Dalliance, they are using modern infrastructure . . funnily enough they are looking at moving from SVG to WebGL or Canvas.

This WebGL example is interesting as it handles drag and drop as objects:

https://www.script-tutorials.com/demos/467/index.html . . https://www.script-tutorials.com/webgl-with-three-js-lesson-10/

and it is well supported.

A couple of relevant Stack Overflow threads: http://stackoverflow.com/questions/5882716/html5-canvas-vs-svg-vs-div and http://stackoverflow.com/questions/11639211/need-to-speed-up-my-svg-could-i-convert-to-webgl-or-canvas

I think that using SVG is going to be far superior over the long haul, at least for the "writeable" tracks.
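A minimal sketch of why SVG is attractive for the editable case (hypothetical helper names, not Apollo's drawing code): each SVG feature is a DOM element that can carry its own event handlers, whereas a Canvas renderer has to resolve clicks by hit-testing against its own feature list.

```javascript
// Sketch: SVG features are DOM nodes, so interaction handlers attach
// directly to the element that was drawn.
const SVG_NS = 'http://www.w3.org/2000/svg';

function drawEditableExon(svgTrack, exon) {
  const rect = document.createElementNS(SVG_NS, 'rect');
  rect.setAttribute('x', exon.screenStart);
  rect.setAttribute('y', 10);
  rect.setAttribute('width', exon.screenEnd - exon.screenStart);
  rect.setAttribute('height', 12);
  // The element itself receives the event; no hit-testing code needed.
  rect.addEventListener('mousedown', () => console.log('start dragging', exon.id));
  svgTrack.appendChild(rect);
  return rect;
}

// Canvas equivalent: the track redraws everything as pixels, so a click has
// to be resolved by hand against the feature list.
function findClickedExon(exons, clickX) {
  return exons.find(e => clickX >= e.screenStart && clickX <= e.screenEnd);
}
```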

cmdcolin commented 8 years ago

I would agree that considering new client-side architectures could be fruitful. Perhaps we can even try to "break out" of the genome browser and do some things similar to what I shared at the last meeting (I forget if I shared it; link here: https://gist.github.com/cmdcolin/c7bc2155ea74464ecc46)

The basic idea of that code is that we can make a simple engine to control the "blocks" ourselves instead of using the GenomeView block-based renderer (which of course assumes linear projections).
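Something along these lines, roughly (a sketch, not the gist's actual code): a small manager that decides for itself which regions become blocks, so a block can come from any scaffold in any orientation rather than from one linear axis.

```javascript
// Sketch of a self-managed block engine (hypothetical names).
class BlockManager {
  constructor(renderBlock) {
    this.renderBlock = renderBlock; // callback that draws one region
    this.blocks = [];
  }

  addBlock(region) {
    const block = { region, element: this.renderBlock(region) };
    this.blocks.push(block);
    return block;
  }
}

// Usage: stitch regions from two different scaffolds into one view.
const manager = new BlockManager(region => {
  console.log(`render ${region.ref}:${region.start}-${region.end}` +
              (region.reverse ? ' (reverse complement)' : ''));
  return null; // real code would return a DOM node for the block
});
manager.addBlock({ ref: 'scaffold_1', start: 1000, end: 2000, reverse: false });
manager.addBlock({ ref: 'scaffold_7', start: 500, end: 900, reverse: true });
```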

nathandunn commented 8 years ago

I was thinking of something a bit more drastic, since every exposed element should work the same as before, but also handle arbitrary projections.

WRT the static images in your gist, it would be great if you could expose a URL (similar to GBrowse, I think) that could create those static images. It wouldn't help Apollo per se, but I can see how it would be useful to other projects.

nathandunn commented 8 years ago

I think that the problem WRT #715 is that the really big advantage of moving to the front end is that the interface is direct (minor), calculations are all done in the client (minor), and model-view calculations are only done once (major, i.e., when you drag stuff around / interact, you are interacting with the original model only and you don't have to back-calculate the view).
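To make that last point concrete (a small sketch with made-up numbers, not Apollo's code): the model keeps genomic coordinates, edits mutate the model directly, and screen coordinates are always derived from the model rather than recovered from the view.

```javascript
// Sketch: the model is the single source of truth; the view is derived.
function modelToView(genomicPos, view) {
  return (genomicPos - view.start) * view.pxPerBp;
}

const view = { start: 10000, pxPerBp: 0.1 };
const exon = { start: 12000, end: 12500 }; // model (genomic) coordinates

// Dragging the right edge by 30 px: convert the pixel delta once, update the
// model, then re-derive the screen position from the model.
const dragDeltaPx = 30;
exon.end += Math.round(dragDeltaPx / view.pxPerBp);

console.log(modelToView(exon.start, view), modelToView(exon.end, view)); // 200 280
```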

cmdcolin commented 8 years ago

UCSC seems to have added this functionality recently: http://genome.ucsc.edu/goldenPath/newsarch.html#030816

nathandunn commented 8 years ago

Note that while it is great work, they are doing less than we've already done and demonstrated we could do.

They use a single set reference track and we could use any set of arbitrary feature tracks, cutting either exons or transcripts, which @deepakunni3 demonstrated.

Additionally, the much harder problem is doing this while projecting individual regions into the same space and doing folding and reverse-complement at the same time.

They also have a different grid display system than we are going for, which is currently problematic for us.

To be fair, a lot of these short-cuts are probably a good idea in order to build something to release on and still have a useable product. That being said, any solution that doesn't involve viewing adjacent genes from scaffolds in a different region isn't really worth pursuing.

cmdcolin commented 8 years ago

I demonstrated on https://github.com/cmdcolin/projectionplugin that it is actually appropriate to break the transformations into 3 steps. This allows things like folding, reverse complement, and combining scaffolds to be done modularly.
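For illustration, roughly what that composition could look like (a sketch, not projectionplugin's actual API): each step is a coordinate transform, and the overall projection is just their composition.

```javascript
// Sketch of three modular coordinate transforms (hypothetical API).

// 1. Combine scaffolds: map a (ref, pos) pair onto one concatenated axis.
const combineScaffolds = order => ({ ref, pos }) => {
  let offset = 0;
  for (const s of order) {
    if (s.ref === ref) return offset + pos;
    offset += s.length;
  }
  throw new Error(`unknown ref ${ref}`);
};

// 2. Fold: keep only the listed intervals of the combined axis.
const fold = keptIntervals => pos => {
  let folded = 0;
  for (const [start, end] of keptIntervals) {
    if (pos < start) break;
    folded += Math.min(pos, end) - start;
  }
  return folded;
};

// 3. Reverse complement: flip an axis of the given (folded) length.
const reverseComplement = length => pos => length - pos;

// Composition keeps the steps independent.
const compose = (...fns) => x => fns.reduce((v, f) => f(v), x);

const project = compose(
  combineScaffolds([{ ref: 'sc1', length: 10000 }, { ref: 'sc2', length: 8000 }]),
  fold([[0, 5000], [12000, 15000]]),
  reverseComplement(8000)
);

console.log(project({ ref: 'sc2', pos: 3000 })); // 2000
```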

nathandunn commented 8 years ago

We have multiple ways of projecting these individual steps . . both on the client and the server. I’m not particularly worried about that.

Our current problem is that our visualizations for doing these are a long way off and not well supported within the current architecture.

On Mar 9, 2016, at 9:39 AM, Colin Diesh notifications@github.com wrote:

Ok, well I'm just mentioning it. Also, I disagree with something being labeled "really hard"

In what way is it really hard

It should just be a composition of several transformations

I demonstrated on https://github.com/cmdcolin/projectionplugin that it is actually appropriate to break the transformations into 3 steps. This allows things like folding and reverse complement. I don't see what causes something to be really hard about combining them.

Also, I didn't really get comments on my projectionplugin. As I said, I'm happy to contribute that code, but I didn't see any comments on it.


cmdcolin commented 8 years ago

Ok, well I appreciate it, but I think the UCSC one is a good example implementation. And they do support "multi-scaffold", but in their case they call it haplotype mode.

See http://genome.ucsc.edu/goldenPath/help/multiRegionHelp.html

Example 5. Haplotype

This type of viz (plugging in the alt loci) is quite interesting because it is relevant even with well-done genome assemblies, not just fragmented ones.

nathandunn commented 8 years ago

That is an interesting use-case. It's slightly different than what we are wanting to do (and they only seem to do a single projection at a time), but something we should definitely be able to support . . even now.