alexyfyf opened this issue 4 years ago
Please post slurm*.out and cromwell.out.
Hi, I hit a similar error with Caper 2.0 / chip-seq-pipeline2 v2.0. I installed chip-seq-pipeline2 using conda, and installed Caper into the encode-chip-seq-pipeline environment using pip.
Here is the actual sbatch command line used for submitting a job (in cromwell.out):
for ITER in 1 2 3; do
sbatch --export=ALL -J cromwell_035dea6f_read_genome_tsv -D /storage/hpc/yangling/Projects/Singularity/chip/chip-data/chip/035dea6f-f3c2-47ce-82b0-e19174e47a3b/call-read_genome_tsv -o /storage/hpc/yangling/Projects/Singularity/chip/chip-data/chip/035dea6f-f3c2-47ce-82b0-e19174e47a3b/call-read_genome_tsv/execution/stdout -e /storage/hpc/yangling/Projects/Singularity/chip/chip-data/chip/035dea6f-f3c2-47ce-82b0-e19174e47a3b/call-read_genome_tsv/execution/stderr \
-p intel-e5,amd-ep2 --account yangling \
-n 1 --ntasks-per-node=1 --cpus-per-task=1 --mem=2048M --time=240 \
\
/storage/hpc/yangling/Projects/Singularity/chip/chip-data/chip/035dea6f-f3c2-47ce-82b0-e19174e47a3b/call-read_genome_tsv/execution/script.caper && break
sleep 30
done
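For context, the `for ITER in 1 2 3` wrapper above is a submit-with-retry pattern: attempt the submission up to three times, break out of the loop on the first success, and sleep between failed attempts. A minimal sketch of the same pattern, with a hypothetical `submit_job` function standing in for the real `sbatch` call:

```shell
#!/bin/sh
# Sketch of the submit-with-retry pattern used above.
# submit_job is a stand-in that fails on the first call and succeeds
# on the second, to show the loop breaking early on success.
attempts=0
submit_job() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 2 ]
}
for ITER in 1 2 3; do
  submit_job && break   # stop retrying once submission succeeds
  sleep 1               # the real loop sleeps 30 s between attempts
done
echo "attempts=$attempts"
```

Running this prints `attempts=2`: the loop retries once after the simulated failure, then stops as soon as the submission succeeds instead of exhausting all three iterations.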
Please check if these resource parameters work on your cluster:
-p intel-e5,amd-ep2 --account yangling \
-n 1 --ntasks-per-node=1 --cpus-per-task=1 --mem=2048M --time=240 \
Also, do not activate a Conda environment. If you want to use Conda, then use caper run ... --conda. Caper internally runs conda run -n ENV_NAME JOB_SCRIPT. You can also use --singularity if you have Singularity installed on your cluster. I recommend Singularity.
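For reference, the cluster-specific resource settings mentioned above normally live in ~/.caper/default.conf rather than in an activated environment. A minimal sketch of such a config for this cluster, assuming Caper's documented Slurm key names (the localization directory path is a placeholder, not taken from this thread):

```
# Sketch of a Slurm-flavored ~/.caper/default.conf
# (key names follow Caper's config format; loc-dir path is a placeholder)
backend=slurm
slurm-partition=intel-e5,amd-ep2
slurm-account=yangling
local-loc-dir=/path/to/scratch
```

With a config like this in place, `caper run chip.wdl -i input.json --conda` (or `--singularity`) picks up the partition and account without any environment activation.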
Thank you very much! Actually, I want to install a standalone version of Caper and chip-seq-pipeline2, so I tried to install Caper in another conda environment or a Singularity image...
Hi team,
I'm using the ENCODE chip-seq-pipeline2 and installed the conda environment for it. I also edited ~/.caper/default.conf as follows. Then I activated the conda environment and ran this command as per your manual:
sbatch -A ls25 -p genomics --qos=genomics -J chip-seq --export=ALL --mem 4G -t 4:00:00 --wrap 'caper run /home/fyan0011/ls25_scratch/feng.yan/software/chip-seq-pipeline2/chip.wdl -i template.json'
I noticed the qos flag does not seem to be used according to the logs. Anyway, the job was submitted, but no child jobs were seen. The slurm out file showed that jobs are
Could you help with this? Thank you!