-
val x = sc.loadGenotypes("s3a://1000genomes/phase1/analysis_results/integrated_call_sets/ALL.chr17.integrated_phase1_v3.20101123.snps_indels_svs.genotypes.vcf.gz")
generates the error Unable to execute…
-
We've seen at least one non-deterministic occurrence of a `ConcurrentModificationException` while running `ReadsPipelineSparkIntegrationTest.testReadsPipelineSpark[5]`.
It seems like there…
-
## Bug Report
### Affected tool(s) or class(es)
ReadsPipelineSpark (HaplotypeCallerSpark) when running on a Spark cluster
### Affected version(s)
- [x] Latest public release version [GATK v4…
-
In #48, we changed our git-versioning plugin to `com.palantir.git-version`; however, this does not handle the tagging system, nor does it help with the release process (thus, it requires some scripting)…
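For reference, a minimal `build.gradle` sketch of how that plugin is typically wired in (the version number here is a placeholder, not necessarily what the project pins):

```groovy
// build.gradle -- minimal sketch; the plugin version below is a placeholder
plugins {
    id 'com.palantir.git-version' version '0.12.3'
}

// gitVersion() derives the project version from `git describe`, so it
// still relies on tags being created by hand (or by a release script)
version gitVersion()
```

This covers version derivation only; tag creation and the rest of the release process remain manual.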
-
When we set up Hadoop HDFS, we always need to provide "hdfs-site.xml" and "core-site.xml". So I think we could record the locations of these XML files in a configuration file and read them automatically.
In that cas…
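As a point of comparison, the key entry such a setup has to locate is the NameNode address in `core-site.xml` (hostname and port below are placeholders); Hadoop clients already resolve both files automatically when `HADOOP_CONF_DIR` points at the directory containing them:

```xml
<!-- core-site.xml: placeholder NameNode address -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```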
-
I am trying to follow the documentation to allow ADAM to read a BAM file from S3.
According to https://adam.readthedocs.io/en/latest/deploying/aws/#input-and-output-data-on-hdfs-and-s3 I should run…
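The setup described there boils down to something like the following sketch, assuming the `hadoop-aws` s3a connector and ADAM's implicit `ADAMContext` conversions are on the classpath; the bucket, key, and credential handling are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.bdgenomics.adam.rdd.ADAMContext._

object S3BamRead {
  // Placeholder input path, purely for illustration
  val InputPath: String = "s3a://my-bucket/sample.bam"

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("adam-s3-bam"))
    // s3a credentials; in practice these usually come from the environment
    // or an instance profile rather than being set explicitly like this
    sc.hadoopConfiguration.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
    sc.hadoopConfiguration.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))
    // loadAlignments dispatches on the file extension (BAM/SAM/CRAM,
    // or ADAM Parquet), so the s3a:// URI is enough to select the reader
    val reads = sc.loadAlignments(InputPath)
    println(reads.rdd.count())
  }
}
```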
-
I would like to create an HDFS file with custom "dfs.blocksize". Is there any way to override the default block-size for a file before it is written? Thank you very much in advance.
My first try is…
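For what it's worth, one route (a sketch, not tested against this setup) is the five-argument `FileSystem.create` overload, which accepts the block size for that one file directly instead of taking the cluster-wide `dfs.blocksize` default; the path and payload below are illustrative:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CustomBlockSize {
  // dfs.blocksize is a byte count; 128 MiB here, purely as an example
  val BlockSizeBytes: Long = 128L * 1024 * 1024

  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val fs = FileSystem.get(conf)
    val path = new Path(args(0))
    // create(path, overwrite, bufferSize, replication, blockSize):
    // the explicit blockSize overrides the default for this file only
    val out = fs.create(path, true,
      conf.getInt("io.file.buffer.size", 4096),
      fs.getDefaultReplication(path),
      BlockSizeBytes)
    out.writeBytes("example payload")
    out.close()
  }
}
```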
-
I am currently working on a search engine that is throughput-oriented and runs entirely on Apache Spark.
As part of this, I need a directory implementation that can operate on HDFS directly. This …
-
Work with project maintainers and stakeholders to create a roadmap document for the next generation of the HTSJDK file-parsing library, and obtain buy-in from all stakeholders.
From our point of vi…
-
Hi,
Our Spark installation uses a MapR filesystem (HDFS-compatible).
The GATK Spark tools do not seem to recognize it.
When running the following command:
> /home/axverdier/Tools/GATK4/gatk-4.beta…