I edited config.yml and ran the pipeline on an input of ~40,000 reads (originally 100,000 before pychopper) as a test. However, it only seems to produce the config.pdf:
```
tm1612s-MacBook-Pro:pipeline-pinfish-analysis callum$ snakemake --use-conda -j all
Building DAG of jobs...
Creating conda environment /Users/callum/pipeline-pinfish-analysis/env.yml...
Downloading remote packages.
Environment for ../../../../env.yml created (location: .snakemake/conda/ba04bccc)
Using shell: /bin/bash
Provided cores: 4
Rules claiming more threads will be scaled down.
Job counts:
	count	jobs
	1	build_minimap_index
	1

[Thu Jan 31 00:54:03 2019]
rule build_minimap_index:
    input: /Users/callum/Genome_files/hg38.fa
    output: index/genome_index.mmi
    jobid: 0
    threads: 2

Activating conda environment: /Users/callum/pipeline-pinfish-analysis/Users/callum/Sync_later/pipeline-pinfish-analysis_HDF/.snakemake/conda/ba04bccc
[M::mm_idx_gen::86.900*1.60] collected minimizers
[M::mm_idx_gen::116.836*1.66] sorted minimizers
[M::main::140.916*1.53] loaded/built the index for 455 target sequence(s)
[M::mm_idx_stat] kmer size: 15; skip: 10; is_hpc: 0; #seq: 455
[M::mm_idx_stat::142.335*1.52] distinct minimizers: 100202295 (37.96% are singletons); average occurrences: 5.732; average spacing: 5.587
[M::main] Version: 2.15-r905
[M::main] CMD: minimap2 -t 2 -k15 -I 1000G -d index/genome_index.mmi /Users/callum/Genome_files/hg38.fa
[M::main] Real time: 143.857 sec; CPU: 218.279 sec; Peak RSS: 7.965 GB
[Thu Jan 31 00:56:27 2019]
Finished job 0.
1 of 1 steps (100%) done
Complete log: /Users/callum/pipeline-pinfish-analysis/.snakemake/log/2019-01-31T005121.607446.snakemake.log
```
I can confirm the index .mmi file was written to the working directory I specified. I've attached my config.yml in case something in it is preventing the pipeline from proceeding to the next steps.
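For anyone hitting the same thing: a Snakemake dry run will list which jobs it still considers pending, and why each one is (or isn't) triggered, without executing anything. This is just a diagnostic sketch run from the pipeline directory, assuming the same invocation as above:

```shell
# Dry run (-n): show the jobs Snakemake would schedule, the shell
# commands they would run (-p), and the reason each is triggered (-r)
snakemake --use-conda -n -p -r

# If only build_minimap_index is listed, inspect what the final target
# rule expects; forcing re-evaluation of all rules (still a dry run)
# shows the full DAG that *would* execute from scratch:
snakemake --use-conda -j 4 --forceall -n
```

If the dry run shows only the index rule, the downstream rules' inputs (e.g. the reads fastq path in config.yml) are probably not resolving to existing files.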