Closed by Joseph-Vineland 2 years ago
That is interesting for a number of reasons.
First, that increasing chunkSizeLimit or fetchSizeLimit causes a new error; I can't really think why that would happen off the top of my head.
Also interesting that it is different on different computers.
The "HTTP undefined" error has been seen to come from that MIME type issue (see https://github.com/GMOD/jbrowse/issues/1512), but it is less common on nginx.
If you have a public instance, I could take a look and see if anything stands out.
Two debugging exercises you can try locally, if you have some interest:
1) Check the Chrome devtools and see what is printed in the console.
2) Check that your server is NOT returning the header Content-Encoding: gzip on responses. If it is returning this header, try to make it stop doing so; JBrowse doesn't like this header because it tells Chrome (the browser code) to unzip the data, while JBrowse generally unzips it with its own JS code.
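One way to check (2) from the command line is something like the following; the hostname and file path here are placeholders, so substitute your own server's URL for one of the BAM files:

```shell
# Fetch only the response headers (-I) silently (-s) for a BAM request.
# "your-jbrowse-host" is a placeholder; replace it with your actual server.
curl -sI "http://your-jbrowse-host/files/sampleName_markedDuplicates.bam" \
  | grep -i 'content-encoding' \
  || echo "no Content-Encoding header (good)"
```

If the grep prints a `Content-Encoding: gzip` line, that is the misconfiguration described above.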
Thanks for the reply cmdcolin. Unfortunately, our browsers are not public, and my supervisor wants to keep it that way. Here is what the devtools console says:
    1.bundle.js:1 Error: Too many BAM features. BAM chunk size 20823692 bytes exceeds chunkSizeLimit of 20000000
        at e.
This same message is printed several times.
@Joseph-Vineland what about when chunkSizeLimit was increased?
Thanks cmdcolin. I discovered that my old laptop can also fail to display the plots if I navigate to a completely different part of the genome.
When I increase "chunkSizeLimit" and "fetchSizeLimit", the browser freezes/crashes when trying to load the SNPCoverage plot. The devtools console gets disconnected, and nothing is printed to it. I'm not sure how to solve the problem.
It's hard to make recommendations without knowing the characteristics of your data, but here are some options:
- Convert to CRAM; CRAM is often faster and smaller than BAM.
- Consider downsampling your data: https://www.biostars.org/p/76791/
- Consider using a CSI index if you continue to use BAM; it has been helpful in deep COVID sequencing data, see this thread: https://twitter.com/cmdcolin/status/1278205413557755906
- You can also consider preparing your data for use with https://github.com/cmdcolin/mpileupplugin (this was also made in collaboration with users looking at deep COVID sequencing data; it's a modified storeClass that uses precomputed SNPCoverage data instead of parsing BAM/CRAM directly).
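The first three options above might look roughly like this with samtools (a sketch, not something run in this thread; all filenames are placeholders for your own data):

```shell
# 1) Convert BAM to CRAM (needs the reference FASTA the BAM was aligned to)
samtools view -C -T reference.fa -o sample.cram sample.bam
samtools index sample.cram

# 2) Downsample, keeping roughly 25% of read pairs (seed.fraction syntax: seed 42, fraction .25)
samtools view -b -s 42.25 -o sample_downsampled.bam sample.bam
samtools index sample_downsampled.bam

# 3) Build a CSI index (instead of BAI) for the existing BAM
samtools index -c sample.bam
```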
A couple of workarounds have been reported; let me know if there is anything else we can do. Maybe we can close this for now.
Hello, I set up some SNPCoverage plots using trackList.json. Example:

    {
      "storeClass": "JBrowse/Store/SeqFeature/BAM",
      "urlTemplate": "./files/sampleName_markedDuplicates.bam",
      "category": "population / Coverage And SNPs",
      "metadata.Description": "Coverage And SNPs Plot",
      "type": "JBrowse/View/Track/SNPCoverage",
      "key": "sampleName",
      "label": "sampleName_coverage"
    }
The plots display perfectly fine in the web browser (Chrome 93.0.4577.63) when I use my old laptop. However, when I use my new laptop (browser: Chrome or Edge), I get "Error: Too many BAM features. BAM chunk size N bytes exceeds chunkSizeLimit of 20000000".
I tried increasing the chunkSizeLimit and then the fetchSizeLimit. That causes a new error, "Error: HTTP undefined when fetching bytes ...", and the problem isn't solved.
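For context, a sketch of how those limits can be raised in the per-track config in trackList.json, assuming JBrowse 1's chunkSizeLimit/fetchSizeLimit track options (the specific values below are illustrative, not from this thread):

```json
{
  "storeClass": "JBrowse/Store/SeqFeature/BAM",
  "urlTemplate": "./files/sampleName_markedDuplicates.bam",
  "type": "JBrowse/View/Track/SNPCoverage",
  "label": "sampleName_coverage",
  "chunkSizeLimit": 50000000,
  "fetchSizeLimit": 100000000
}
```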
I tried editing '/etc/nginx/mime.types' to include the line: application/octet-stream bam bami bai;
But that did not fix the problem either. I think it's a bug. It is very odd that it works perfectly fine on my old laptop but not at all on my new one.