ahwanpandey opened 3 years ago
Tried with glpk and got the same error:
$ python -m hatchet run hatchet.ini ; exit $?
[2021-Aug-07 08:48:57]# Parsing and checking input arguments
[2021-Aug-07 08:48:58]
normal: ('data/normal.bam', 'Normal')
samples: {('data/bulk_Noneclone1_09clone0_01normal.sorted.bam', 'TumorSample3'), ('data/bulk_03clone1_06clone0_01normal.sorted.bam', 'TumorSample1'), ('data/bulk_08clone1_Noneclone0_02normal.sorted.bam', 'TumorSample2')}
chromosomes: ['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8', 'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15', 'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22']
samtools: samtools
regions: None
size: 50000
reference: data/hg19.fa
refdict: data/hg19.dict
j: 40
q: 11
outputNormal: output/rdr/normal.1bed
outputTumors: output/rdr/tumor.1bed
outputTotal: output/rdr/total.tsv
verbose: False
[2021-Aug-07 08:48:58]# Retrieving genomic regions to consider from maximum chromosome length
[2021-Aug-07 08:48:58]# Binning and counting the normal sample
Progress: |████████████████████████████████████████| 100.0% Complete [[2021-Aug-07 08:49:15]Binner-21 ends on Normal for chr22)]
[2021-Aug-07 08:49:15]# Writing the read counts for bins of normal sample
[2021-Aug-07 08:49:15]# Binning and counting the tumor samples
Progress: |████████████████████████████████████████| 100.0% Complete [[2021-Aug-07 08:49:30]Binner-38 ends on TumorSample2 for chr22)]
[2021-Aug-07 08:49:30]# Writing the read counts for bins of tumor samples
[2021-Aug-07 08:49:30]# Counting total number of reads for normal and tumor samples
Progress: |████████████████████████████████████████| 100.0% Complete [[2021-Aug-07 08:49:40]TotalCounter-86 ends on Normal for chr22]
[2021-Aug-07 08:49:40]# Writing the total read counts for all samples in output/rdr/total.tsv
[2021-Aug-07 08:49:40]# Parsing the input arguments, checking the consistency of given files, and extracting required information
[2021-Aug-07 08:49:56]
normal: ('/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/data/normal.bam', 'Normal')
chromosomes: ['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8', 'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15', 'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22']
samtools: samtools
bcftools: bcftools
snps: https://ftp.ncbi.nih.gov/snp/organisms/human_9606_b151_GRCh37p13/VCF/GATK/00-All.vcf.gz
reference: data/hg19.fa
j: 40
q: 0
Q: 11
E: False
mincov: 8
maxcov: 300
outputsnps: /researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps
verbose: False
[2021-Aug-07 08:49:56]# Inferring SNPs from the normal sample
Progress: |████████████████████████████████████████| 100.0% Complete [[2021-Aug-07 10:11:33]Caller-104 ends on Normal for chr2)]
[2021-Aug-07 10:11:33]# Counting number of identified SNPs
[2021-Aug-07 10:11:34]277170 SNPs have been identified in total
[2021-Aug-07 10:11:34]# SNP Calling is concluded
[2021-Aug-07 10:11:34]## Called SNPs have been written per chromosome in:
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr1.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr10.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr11.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr12.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr13.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr14.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr15.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr16.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr17.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr18.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr19.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr2.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr20.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr21.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr22.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr3.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr4.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr5.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr6.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr7.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr8.vcf.gz
/researchers/ahwan.pandey/Projects/WholeGenome/Project_DG/tests/hatchet/output/snps/chr9.vcf.gz
[2021-Aug-07 10:11:34]# Parsing the input arguments, checking the consistency of given files, and extracting required information
[2021-Aug-07 10:11:45]
normal: ('data/normal.bam', 'Normal')
samples: {('data/bulk_Noneclone1_09clone0_01normal.sorted.bam', 'TumorSample3'), ('data/bulk_03clone1_06clone0_01normal.sorted.bam', 'TumorSample1'), ('data/bulk_08clone1_Noneclone0_02normal.sorted.bam', 'TumorSample2')}
chromosomes: ['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8', 'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15', 'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22']
samtools: samtools
bcftools: bcftools
snps: {'chr1': 'output/snps/chr1.vcf.gz', 'chr2': 'output/snps/chr2.vcf.gz', 'chr3': 'output/snps/chr3.vcf.gz', 'chr4': 'output/snps/chr4.vcf.gz', 'chr5': 'output/snps/chr5.vcf.gz', 'chr6': 'output/snps/chr6.vcf.gz', 'chr7': 'output/snps/chr7.vcf.gz', 'chr8': 'output/snps/chr8.vcf.gz', 'chr9': 'output/snps/chr9.vcf.gz', 'chr10': 'output/snps/chr10.vcf.gz', 'chr11': 'output/snps/chr11.vcf.gz', 'chr12': 'output/snps/chr12.vcf.gz', 'chr13': 'output/snps/chr13.vcf.gz', 'chr14': 'output/snps/chr14.vcf.gz', 'chr15': 'output/snps/chr15.vcf.gz', 'chr16': 'output/snps/chr16.vcf.gz', 'chr17': 'output/snps/chr17.vcf.gz', 'chr18': 'output/snps/chr18.vcf.gz', 'chr19': 'output/snps/chr19.vcf.gz', 'chr20': 'output/snps/chr20.vcf.gz', 'chr21': 'output/snps/chr21.vcf.gz', 'chr22': 'output/snps/chr22.vcf.gz'}
regions: None
reference: data/hg19.fa
j: 40
q: 0
Q: 11
qual: 11
E: False
gamma: 0.05
maxshift: 0.5
mincov: 8
maxcov: 300
outputNormal: output/baf/normal.1bed
outputTumors: output/baf/tumor.1bed
outputSnps: ./
verbose: False
[2021-Aug-07 10:11:45]# Counting SNPs alleles from the matched-normal sample
Progress: |████████████████████████████████████████| 100.0% Complete [[2021-Aug-07 10:13:06]AlleleCounter-146 ends on Normal for chr22]
[2021-Aug-07 10:13:07]# Selecting heterozygous SNPs
[2021-Aug-07 10:13:37]# Writing the list of selected SNPs, covered and heterozygous in the normal sample
[2021-Aug-07 10:13:37]# Writing the allele counts of the normal sample for selected SNPs
[2021-Aug-07 10:13:37]# Counting SNPs alleles from tumour samples
Progress: |████████████████████████████████████████| 100.0% Complete [[2021-Aug-07 10:16:05]AlleleCounter-184 ends on TumorSample2 for chr22]
[2021-Aug-07 10:16:11]# Writing the allele counts of tumor samples for selected SNPs
[2021-Aug-07 10:16:12]# Parsing and checking input arguments
[2021-Aug-07 10:16:12]
normalbins: output/rdr/normal.1bed
tumorbins: output/rdr/tumor.1bed
tumorbafs: output/baf/tumor.1bed
phase: None
block: 50000
diploidbaf: 0.1
totalcounts: output/rdr/total.tsv
gamma: 0.05
seed: 0
verbose: False
disable: False
[2021-Aug-07 10:16:12]# Reading and checking the bin count files for computing read-depth ratios
[2021-Aug-07 10:16:13]# Reading and checking the allele count files for computing BAF
[2021-Aug-07 10:16:15]# Reading and checking the total read count files
[2021-Aug-07 10:16:15]# Combine the bin and allele counts to obtain BAF and RD for each bin
Progress: |████████████████████████████████████████| 100.0% Complete [Combine bin (51300000, 51304566) in chromosome chr22]
[2021-Aug-07 10:16:26]# Parsing and checking input arguments
[2021-Aug-07 10:16:26]# Reading the combined BB file
[2021-Aug-07 10:16:26]# Format data to cluster
[2021-Aug-07 10:16:26]# Clustering bins by RD and BAF across tumor samples
[2021-Aug-07 10:16:30]## Clustering with K=50 and c=0.02...
[2021-Aug-07 10:16:34]# Writing BBC output with resulting clusters
[2021-Aug-07 10:16:34]# Segmenting bins
[2021-Aug-07 10:16:34]# Determining the largest cluster as diploid or tetraploid and rescaling all the clusters inside the threshold accordingly
[2021-Aug-07 10:16:34]# Writing REF output with resulting segments
# Parsing and checking input arguments
## input: output/bbc/bulk.bbc
## command: None
## segfile: None
## ct: None
## st: None
## cmap: tab20
## resolution: None
## xmin: None
## xmax: None
## ymin: None
## ymax: None
## x: output/plots
## figsize: None
## markersize: 0
## colwrap: 2
## fontscale: 1
## pdf: False
## dpi: 300
# Reading input BBC file
# Bin's clusters are selected accordingly to the provided thresholds
## Selected clusters:
7: SIZE= 1.0, # CHRS= {'chr22'}
26: SIZE= 1.0, # CHRS= {'chr22'}
34: SIZE= 1.0, # CHRS= {'chr22'}
1: SIZE= 117.0, # CHRS= {'chr22'}
9: SIZE= 1.0, # CHRS= {'chr22'}
39: SIZE= 1.0, # CHRS= {'chr22'}
14: SIZE= 136.0, # CHRS= {'chr22'}
28: SIZE= 8.0, # CHRS= {'chr22'}
5: SIZE= 6.0, # CHRS= {'chr22'}
11: SIZE= 1.0, # CHRS= {'chr22'}
17: SIZE= 1.0, # CHRS= {'chr22'}
4: SIZE= 2.0, # CHRS= {'chr22'}
30: SIZE= 89.0, # CHRS= {'chr22'}
10: SIZE= 1.0, # CHRS= {'chr22'}
12: SIZE= 335.0, # CHRS= {'chr22'}
# [RD] Plotting read-depth ratio (RDR) for all samples in output/plots/readdepthratio.pdf
## Plotting for all samples..
## Plotting for TumorSample1..
## Plotting for TumorSample2..
## Plotting for TumorSample3..
# [CRD] Plotting the clustered read-depth ratio (RDR) for each sample in output/plots/readdepthratio_clustered.pdf
## Plotting for TumorSample1..
## Plotting for TumorSample2..
## Plotting for TumorSample3..
# [BAF] Plotting B-allele frequency (BAF) for all samples in output/plots/ballelefrequency.pdf
## Plotting for all samples..
## Plotting for TumorSample1..
## Plotting for TumorSample2..
## Plotting for TumorSample3..
# [CBAF] Plotting the clustered B-allele frequency (BAF) for each sample in output/plots/ballelefrequency_clustered.pdf
## Plotting for TumorSample1..
## Plotting for TumorSample2..
## Plotting for TumorSample3..
# [BB] Plotting RDR-BB for all samples in output/plots/bb.pdf
## Plotting for TumorSample1..
## Plotting for TumorSample2..
## Plotting for TumorSample3..
# [CBB] Plotting clustered RDR-BB for all samples in output/plots/bb_clustered.png
### Provide a .seg file to also plot CLUSTER
# Checking and parsing input arguments
## solver: /home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/solve
## input: output/bbc/bulk
## seg: output/bbc/bulk.seg
## bbc: output/bbc/bulk.bbc
## ln: 2
## un: 8
## clonal: None
## ampdel: True
## d: None
## eD: 6
## eT: 12
## ts: 0.008
## tc: 1
## td: 0.01
## tR: 0.08
## tB: 0.04
## mR: 0.08
## mB: 0.04
## limit: None
## g: 0.3
## p: 400
## j: 40
## r: None
## s: None
## m: None
## u: 0.03
## f: None
## M: 2
## x: output/results
## diploid: False
## tetraploid: False
## v: 2
# Reading and parsing clustered bins in BBC file
# Reading and parsing bin clusters in SEG file
# Computing the cluster sizes
# Filtering clusters based on covered genome size and covered chromosomes
# Selected clusters: 1, 14, 28, 30, 12
## Features of selected clusters:
## 1: SIZE= 5850000.0 #CHRS= 1 (RDR, BAF)= (0.6626655658555413, 0.17185776939255906)-(0.6016436317410113, 0.09404540650850916)-(0.5949436110754068, 0.09374631065286687)
## 14: SIZE= 6850000.0 #CHRS= 1 (RDR, BAF)= (0.6656231936159769, 0.17043834911521052)-(0.9306147610101033, 0.3980729247929056)-(1.0790571523693369, 0.5)
## 28: SIZE= 700000.0 #CHRS= 1 (RDR, BAF)= (1.1134634768290284, 0.10095035798648624)-(1.0948092717597655, 0.3475073998337955)-(1.0853661350353168, 0.4998033104071832)
## 30: SIZE= 4450000.0 #CHRS= 1 (RDR, BAF)= (1.5397645599312657, 0.3564064934049476)-(1.2528119037391516, 0.4819188759123539)-(1.0794927197816984, 0.5)
## 12: SIZE= 16900000.0 #CHRS= 1 (RDR, BAF)= (1.1021982782211601, 0.5)-(1.0894320387312644, 0.5)-(1.076669153368248, 0.5)
# Finding the neutral diploid/tetraploid cluster
## Cluster selected as neutral (diploid/tetraploid) is 12
# Running diploid
## Running diploid with 2 clones
# Best objective found with 2 clones: 29.500410517906285
## Running diploid with 3 clones
# Best objective found with 3 clones: 4.841306550173054
## Running diploid with 4 clones
# Best objective found with 4 clones: 2.0480407187157303
## Running diploid with 5 clones
# Best objective found with 5 clones: 1.133205978871793
## Running diploid with 6 clones
# Best objective found with 6 clones: 1.1227910253062714
## Running diploid with 7 clones
# Best objective found with 7 clones: 1.2679437303553873
## Running diploid with 8 clones
# Best objective found with 8 clones: 1.406136696701761
# Finding clonal clusters and their copy numbers
## Found pattern of size 22750000.0: {'12': (2, 2), '1': (2, 0)}
## Chosen pattern of size 22750000.0: {'12': (2, 2), '1': (2, 0)}
# Running tetraploid
## Running tetraploid with 2 clones
/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/solve/utils.py:42: UserWarning: Please specify a single cluster when CN_A+CN_B=2
warnings.warn('Please specify a single cluster when CN_A+CN_B=2')
# Best objective found with 2 clones: 66.94580753036496
## Running tetraploid with 3 clones
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/solve/cd.py", line 52, in _work
return worker.run(cd.hcA, cd.hcB, u, max_iters=max_iters, max_convergence_iters=max_convergence_iters)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/solve/cd.py", line 25, in run
carch_results = carch.run(self.solver_type)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/solve/ilp_subset.py", line 544, in run
results = solver.solve(self.model, **kwargs)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/pyomo/opt/base/solvers.py", line 578, in solve
_status = self._apply_solver()
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/pyomo/opt/solver/shellcmd.py", line 246, in _apply_solver
self._rc, self._log = self._execute_command(self._command)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/pyomo/opt/solver/shellcmd.py", line 314, in _execute_command
results = subprocess.run(
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/subprocess.py", line 507, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/subprocess.py", line 1134, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/subprocess.py", line 2005, in _communicate
self.wait(timeout=self._remaining_time(endtime))
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/subprocess.py", line 1189, in wait
return self._wait(timeout=timeout)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/subprocess.py", line 1909, in _wait
raise TimeoutExpired(self.args, timeout)
subprocess.TimeoutExpired: Command '['/home/apandey/.conda/envs/hatchet/bin/glpsol', '--tmlim', '100', '--write', '/tmp/tmp97pavahn.glpk.raw', '--wglp', '/tmp/tmp31n6xarg.glpk.glp', '--cpxlp', '/tmp/tmpp7x59ojm.pyomo.lp']' timed out after 100.99989527789876 seconds
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/__main__.py", line 71, in <module>
sys.exit(main())
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/__main__.py", line 65, in main
globals()[command](args)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/run.py", line 151, in main
hatchet_main(
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/bin/HATCHet.py", line 205, in main
tetraploidObjs = runningTetraploid(clonal=clonal, scale=scale, size=size, args=args)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/bin/HATCHet.py", line 611, in runningTetraploid
results.append((n, execute(args, basecmd, n, outprefix), outprefix))
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/bin/HATCHet.py", line 651, in execute
return execute_python(config.compute_cn.solver, args, n, outprefix)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/bin/HATCHet.py", line 623, in execute_python
obj, cA, cB, u, cluster_ids, sample_ids = solve(
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/solve/__init__.py", line 72, in solve
return cd.run(solver_type=solver, max_iters=max_iters, n_seed=n_seed, j=n_worker, random_seed=random_seed)
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/utils/solve/cd.py", line 76, in run
results = future.result()
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/home/apandey/.conda/envs/hatchet/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
subprocess.TimeoutExpired: Command '['/home/apandey/.conda/envs/hatchet/bin/glpsol', '--tmlim', '100', '--write', '/tmp/tmp97pavahn.glpk.raw', '--wglp', '/tmp/tmp31n6xarg.glpk.glp', '--cpxlp', '/tmp/tmpp7x59ojm.pyomo.lp']' timed out after 100.99989527789876 seconds
exit
srun: error: papr-res-compute07: task 0: Exited with exit code 1
srun: Terminating job step 8606280.0
I am getting a similar error but with glpk as the solver
/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/utils/solve/utils.py:42: UserWarning: Please specify a single cluster when CN_A+CN_B=2
warnings.warn('Please specify a single cluster when CN_A+CN_B=2')
Traceback (most recent call last):
File "/home/sahays/miniconda3/envs/HATCHET/bin/hatchet", line 33, in <module>
sys.exit(load_entry_point('hatchet==0.4.4', 'console_scripts', 'hatchet')())
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/__main__.py", line 65, in main
globals()[command](args)
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/utils/run.py", line 156, in main
] + extra_args
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/bin/HATCHet.py", line 205, in main
tetraploidObjs = runningTetraploid(clonal=clonal, scale=scale, size=size, args=args)
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/bin/HATCHet.py", line 611, in runningTetraploid
results.append((n, execute(args, basecmd, n, outprefix), outprefix))
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/bin/HATCHet.py", line 651, in execute
return execute_python(config.compute_cn.solver, args, n, outprefix)
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/bin/HATCHet.py", line 637, in execute_python
max_iters=args['f']
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/utils/solve/__init__.py", line 59, in solve
copy_numbers = parse_clonal(clonal)
File "/home/sahays/miniconda3/envs/HATCHET/lib/python3.7/site-packages/hatchet/utils/solve/utils.py", line 44, in parse_clonal
raise ValueError('Cannot specify two clusters with same CN_A+CN_B')
ValueError: Cannot specify two clusters with same CN_A+CN_B
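For context, the check that raises this ValueError can be sketched as follows. This is a hypothetical simplification, not HATCHet's actual parse_clonal code; it assumes the clonal specification is a comma-separated list of cluster:CN_A:CN_B entries (as in the pattern {'12': (2, 2), '1': (2, 0)} found in the log above):

```python
def parse_clonal(clonal):
    """Parse a clonal spec like '12:2:2,1:2:0' into {cluster: (cn_a, cn_b)}.

    Hypothetical simplification of the validation that produces the
    ValueError above: no two clusters may share the same CN_A+CN_B total.
    """
    copy_numbers = {}
    seen_totals = set()
    for entry in clonal.split(','):
        cluster, cn_a, cn_b = entry.split(':')
        cn_a, cn_b = int(cn_a), int(cn_b)
        if cn_a + cn_b in seen_totals:
            raise ValueError('Cannot specify two clusters with same CN_A+CN_B')
        seen_totals.add(cn_a + cn_b)
        copy_numbers[cluster] = (cn_a, cn_b)
    return copy_numbers
```

So the error fires whenever two clonal clusters are given the same total copy number, e.g. '12:1:1,1:2:0' (both totals equal 2).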
When using the cbc solver in a Docker environment, I am also receiving repeated warnings: "WARNING: Loading a SolverResults object with an 'aborted' status, but containing a solution".
In the end the program also quits with a final ValueError: Cannot specify two clusters with same CN_A+CN_B.
Thank you for your feedback! There seems to be an issue with the alternative solvers, and we are looking into it. We will post an update soon.
@ahwanpandey Thank you for your report; we would like to ask if you could please re-run your example after increasing the maximum time limit for every iteration of compute-cn. @vineetbansal, could you please describe how this new feature can be tested?
I am getting a similar error but with glpk as the solver
@shashwatsahay Your error does not seem to be related to the original issue. To better investigate this, could you please provide us with the full log message from HATCHet's run? We particularly need the output of the step compute-cn.
Moreover, if possible, it would be extremely helpful if you could provide the temporary files best.bbc and best.seg within the folder bbc, so that we can try to reproduce your issue.
Thank you for your cooperation.
@simozacca Thank you for looking into this. You can find the log files here.
For the test files, could you please provide me with an email address, as I am a bit uncomfortable sharing the data on an open platform?
I have removed the paths for security purposes.
@shashwatsahay Thank you for this! We believe that there is a bug in the code that runs alternative solvers, and we are working on a fix. We will let you know once it is ready, and you can test the new version on your data (@vineetbansal will update you on the required steps for this test). If this works, we do not need the temporary files; otherwise, we will provide you our emails to receive the temporary files for further investigation.
Thank you so much for your cooperation.
@ahwanpandey Thank you for your report; we would like to ask if you could please re-run your example by increasing the maximum time limit for every iteration of compute-cn
Hi, I'm having the same issue (using 0.4.6). It looks like the default is no maximum time limit (-s, --timelimit)? https://github.com/raphael-group/hatchet/blob/master/docs/source/doc_compute_cn.md but maybe I'm looking at the wrong option?
@tyamaguchi-ucla, @ahwanpandey - the time limit for every iteration of the ILP solver is internally set to 100 seconds. You can use the environment variable HATCHET_COMPUTE_CN_TIMELIMIT (or, alternatively, if you're using a hatchet.ini file, the [compute_cn].timelimit variable) to set the time limit (in seconds) for the ILP step, in HATCHet 0.4.5 and above.
You're right in thinking that the --timelimit flag should have worked, but currently it only affects the C++ Gurobi implementation of the solver inside HATCHet. We will look to make the interface consistent in the next version so there's less confusion on this issue going forward. Thanks for working with us on the issue.
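For reference, the hatchet.ini form of this setting would look like the fragment below (1000 seconds is an illustrative value, not a recommendation):

```ini
[compute_cn]
timelimit = 1000
```

Equivalently, exporting HATCHET_COMPUTE_CN_TIMELIMIT=1000 in the shell before running HATCHet should have the same effect in 0.4.5 and above.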
@tyamaguchi-ucla, @ahwanpandey - HATCHet 0.4.7 and later will correctly look at the --timelimit parameter (time limit for each ILP iteration, in seconds) that you specify on the command line, for pyomo-supported solvers. By default, there is no time limit. Can you upgrade to 0.4.7 and see if the TimeoutExpired error goes away? Note that you may have to wait a really long time for things to finish, if the original 100-second limit (as was hardcoded in earlier versions of HATCHet) was not sufficient for the solver to converge. glpk is indeed very slow; cbc may be faster. In general we recommend Gurobi if possible (see http://compbio.cs.brown.edu/hatchet/README.html#using-gurobi), since it is substantially faster in our experiments.
@shashwatsahay, @ardydavari - the error you were seeing was indeed a bug when using pyomo-related solvers, and it has been fixed in HATCHet 0.4.6 and later. Can you reinstall HATCHet and try again?
@vineetbansal thanks for the update. The update would be helpful. I used 0.4.6 and changed the timelimit from 100 to 1000 and it worked, but as you mentioned, it took a few days to finish using the cbc solver. I started using Gurobi, which was able to process the same sample in a few hours.
@simozacca Hi, I am using HATCHet 0.4.14 with the cbc solver, but I am encountering a similar problem. Any suggestions? Thanks in advance!
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/users/.conda/envs/hatchet/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/users/.conda/envs/hatchet/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/users/.conda/envs/hatchet/lib/python3.9/site-packages/hatchet/__main__.py", line 66, in <module>
Loading a SolverResults object with an 'aborted' status, but containing a solution
Hello,
I am getting some errors trying to run the demo data.
Here's the relevant output from the run.
Also, I am using the cbc solver and running on a compute node with 64 GB of RAM and 8 cores.
Thanks, Ahwan