higlass / higlass-docker

Builds a docker container wrapping higlass-server and higlass-client in nginx
MIT License

Speed up performance of local server #116

Open mccalluc opened 7 years ago

mccalluc commented 7 years ago

@nils: Is this the standalone container, or the local deployment with redis? Can you also give a URL for the file you're using locally? (I would like to get the workflow smoothed out so that we can get to a reproducing state more quickly.)

i am investigating why the performance of the local deployment is so poor -> thread

5 replies
nils [3 hours ago] 
this is what i am comparing: http://higlass.io/app/?config=MMoYSyeNQZqCZJu6-OrBJA

nils [3 hours ago] 
i have a pretty similar setup locally (same data, just a lower resolution file of the same data set)

nils [3 hours ago] 
loading the view from the remote server takes 3.10 seconds (about 12.5 MB are transferred)

nils [3 hours ago] 
loading a roughly equivalent view from the local server takes 11.7 seconds (about 11.6 MB are transferred)

nils [3 hours ago] 
i posted the screenshots of the network load in the main channel - key observation: loading from the local instance takes forever because a ton of time is spent waiting for responses; time required to load from the remote is driven by download times.
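To put numbers on the "waiting vs. downloading" split, here is a minimal sketch that times a single tile request from Python, separating time-to-first-byte from body download time. The URL is a placeholder (not from this deployment) - paste a real tile request copied from the browser's network tab.

# Rough sketch: separate "waiting" time (server think time / TTFB) from
# download time for one request. Replace URL with a real request URL
# copied from the browser's network tab.
import time
import requests

URL = "http://localhost/api/v1/tiles/?d=SOME_TILESET_UUID.0.0.0"  # placeholder

with requests.get(URL, stream=True) as resp:
    waiting = resp.elapsed.total_seconds()   # time until response headers arrived
    start = time.monotonic()
    body = resp.content                      # forces the body download
    download = time.monotonic() - start

print(f"status={resp.status_code} waiting={waiting:.2f}s "
      f"download={download:.2f}s bytes={len(body)}")

If waiting dominates on the local instance while download dominates on higlass.io, that matches the observation above.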
mccalluc commented 7 years ago

(I haven't been able to replicate this with 1000kb and 100kb cooler files: redraws seem to take about 3 seconds consistently. I'm downloading Rao2014-GM12878-MboI-allreps-filtered.1kb.multires.cool and will see if that makes a difference.)

mccalluc commented 7 years ago

I've downloaded the Rao dataset. If I keep scrolling sideways fast enough, the rendering may never catch up. Here's stripped-down output from top while I was vigorously scrolling and zooming; the process can use more than 100% of one core:

PID    COMMAND          %CPU
89759  com.docker.hyper 0.0  
89759  com.docker.hyper 0.9  
89759  com.docker.hyper 1.0  
89759  com.docker.hyper 1.3  
89759  com.docker.hyper 64.7 
89759  com.docker.hyper 101.2
89759  com.docker.hyper 123.1
89759  com.docker.hyper 165.3
89759  com.docker.hyper 3.4  
89759  com.docker.hyper 128.9
89759  com.docker.hyper 72.0 
89759  com.docker.hyper 6.2  
89759  com.docker.hyper 101.9
89759  com.docker.hyper 86.6 
89759  com.docker.hyper 98.1 
89759  com.docker.hyper 84.1 
89759  com.docker.hyper 179.7
89759  com.docker.hyper 93.4 
89759  com.docker.hyper 37.4 
89759  com.docker.hyper 1.3  
89759  com.docker.hyper 0.9  
89759  com.docker.hyper 23.0 
89759  com.docker.hyper 97.0 
89759  com.docker.hyper 1.2  
89759  com.docker.hyper 40.4 
89759  com.docker.hyper 107.0
89759  com.docker.hyper 131.4
89759  com.docker.hyper 166.0
89759  com.docker.hyper 130.3
89759  com.docker.hyper 130.8

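For reference, here is a rough, scriptable way to collect the same %CPU samples instead of eyeballing top. The PID argument is whatever top reports for the Docker process (89759 above); psutil is an assumed dependency, not something the container ships with.

# Sample one PID's CPU usage once per second with psutil.
# Values can exceed 100% on multi-core machines, as in the trace above.
import sys
import psutil

pid = int(sys.argv[1])          # e.g. 89759 in the trace above
proc = psutil.Process(pid)

for _ in range(30):
    # cpu_percent(interval=1.0) blocks for one second per sample
    print(f"{pid}  {proc.name()[:16]:<16} {proc.cpu_percent(interval=1.0):5.1f}")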
My understanding is that the requests are handled in order, so if the server gets behind, it is hard for it to catch up. It's interesting that in the first batch it spends a lot of time simply trying to connect; after that the requests seem to come in at about the same rate, but it is able to stay caught up.

[screenshot: network request timing, 2017-03-07 2:25 PM]
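If in-order handling is the suspected bottleneck, a quick check is to compare sequential vs. concurrent fetches of a few tile URLs against the local server. This is only a sketch; the URLs are placeholders to be replaced with real tile requests from the network tab.

# If the concurrent total is close to the slowest single request, the server
# handles requests in parallel; if it is close to the sequential total, it
# is effectively processing them one at a time.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URLS = [  # placeholders - paste real tile URLs here
    "http://localhost/api/v1/tiles/?d=SOME_UUID.3.0.0",
    "http://localhost/api/v1/tiles/?d=SOME_UUID.3.0.1",
    "http://localhost/api/v1/tiles/?d=SOME_UUID.3.1.0",
    "http://localhost/api/v1/tiles/?d=SOME_UUID.3.1.1",
]

def fetch(url):
    t0 = time.monotonic()
    requests.get(url).raise_for_status()
    return time.monotonic() - t0

t0 = time.monotonic()
for u in URLS:
    fetch(u)
print(f"sequential: {time.monotonic() - t0:.2f}s total")

t0 = time.monotonic()
with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
    list(pool.map(fetch, URLS))
print(f"concurrent: {time.monotonic() - t0:.2f}s total")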