Open biofuture opened 5 years ago
Hi @biofuture ,
you should comment out the executor = 'pbs' line in the nextflow.config file.
With the executor set to pbs, the pipeline expects a PBS/Torque-family batch scheduler to be available, which is not your case!
By the way, you don't need root permissions to run YAMP on an HPC system. You can install Nextflow in your user area and download YAMP locally as well; the batch scheduler will take care of executing YAMP on the nodes.
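For reference, a user-area install can look like the sketch below. The install one-liner is the standard command from the Nextflow docs; the ~/bin location is just a common convention, not a requirement:

```shell
# Install Nextflow into your home directory (no root needed).
# One-liner from the official Nextflow docs (needs curl and Java):
#   curl -s https://get.nextflow.io | bash
#   mkdir -p ~/bin && mv nextflow ~/bin/
# Make sure the launcher is found first on your PATH:
export PATH="$HOME/bin:$PATH"
echo "$PATH" | tr ':' '\n' | head -n 1   # first PATH entry is now $HOME/bin
```

After that, cloning the YAMP repository into your user area is enough; nothing needs to be installed system-wide.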
Hope this helps!
I will give it a try. Thanks.
Please check the error message:
nextflow run YAMP.nf --reads1 /srv/scratch/mrcseq/Firstbactchshotgun/GON6648-6714_Pool1/GON6648A1/SCO_01_S90_R1_001.fastq.gz --reads2 /srv/scratch/mrcseq/Firstbactchshotgun/GON6648-6714_Pool1/GON6648A1/SCO_01_S90_R2_001.fastq.gz --prefix SCO_01 --outdir /srv/scratch/mrcseq/Firstbactchshotgun/LIVER/SCO_01 --mode QC

cannot allocate memory for thread-local data: ABORT
NOTE: Nextflow is trying to use the Java VM defined by the following environment variables:
JAVA_CMD: /srv/scratch/mrcbio/bin/miniconda3/envs/YAMPconda/bin/java
NXF_OPTS:
I commented it out like this:
//executor should be set to 'pbs' when a resource manager belonging to the
//PBS/Torque family of batch schedulers is used, or set to 'sge' when using
//a Sun Grid Engine cluster (or a compatible platform, e.g., Open Grid Engine)
//executor = 'pbs'
//Set the used queue, this queue will be used for all the processes
//queue = 'mrcbio48'
Hi @biofuture, yes, you commented out the executor and queue directives correctly. The problem now seems to be that you don't have enough memory. Have you tried looking at the Java VM environment variables defined in your conda environment?
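The empty NXF_OPTS in your log means no JVM memory limits are set, so Java asks for its default heap; on login nodes with per-user memory limits that can trigger the "cannot allocate memory" abort. One common fix is to cap the Nextflow JVM heap via NXF_OPTS before launching. A sketch (the exact sizes below are assumptions, tune them to your node):

```shell
# Cap the Nextflow JVM heap so it fits within the login node's per-user
# memory limit (the sizes are illustrative -- adjust to your system):
export NXF_OPTS='-Xms512m -Xmx2g'
echo "$NXF_OPTS"   # verify the setting before launching the pipeline
# then re-run your command, e.g.:
#   nextflow run YAMP.nf --reads1 ... --reads2 ... --prefix SCO_01 --mode QC
```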
Hi,
I installed YAMP under a conda environment, "YAMPconda", by installing all the dependent software into that environment. I have already installed Nextflow.
I activated my conda environment YAMPconda so that all the dependent software can be accessed by default. I then ran the command, but it always produces this error message:
Launching YAMP.nf [marvelous_church] - revision: 79aff4c7d7
WARN: Unknown directive jobName for process dedup
[warm up] executor > pbs
WARN: Unknown directive jobName for process trim
WARN: Unknown directive jobName for process decontaminate
WARN: Unknown directive jobName for process qualityAssessment
WARN: Unknown directive jobName for process profileTaxa
WARN: Unknown directive jobName for process alphaDiversity
WARN: Unknown directive jobName for process profileFunction
WARN: Unknown directive jobName for process logQC
WARN: Unknown directive jobName for process saveQCtmpfile
WARN: Unknown directive jobName for process logCC
WARN: Unknown directive jobName for process saveCCtmpfile
executor > pbs (3)
[40/3e348e] process > qualityAssessment [ 0%] 0 of 2

The pipeline gets stuck here.
It seems that YAMP submits my jobs to a computing node in the queue, but YAMPconda is not activated on that node. So I think that if I run YAMP locally after activating YAMPconda, I can run the pipeline. Since I cannot install the software on all computing nodes myself, I can only install it under my own account.
Can you help me figure out how to run YAMP in my case? I cannot install YAMP system-wide on the HPC as an administrator, but I still want to run YAMP on the HPC.
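One way to make your environment available on the compute nodes, without installing anything system-wide, is Nextflow's built-in conda support: if you point the process-level conda setting at an existing environment, Nextflow activates it inside each submitted job. A minimal nextflow.config sketch, using the environment path taken from your own log above (verify it matches your setup):

```groovy
// nextflow.config -- sketch, assuming your env lives at the path below
process {
    // Nextflow activates this conda environment in every job it submits,
    // so the tools are found on the compute nodes as well
    conda = '/srv/scratch/mrcbio/bin/miniconda3/envs/YAMPconda'
}
```

With this in place, you can keep the pbs executor and let the scheduler dispatch jobs, since each job activates the environment itself.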