ncbi / pgap

NCBI Prokaryotic Genome Annotation Pipeline

[BUG] Binding non-existent file #281

Open kkrick-sdsu opened 12 months ago

kkrick-sdsu commented 12 months ago

Describe the bug
PGAP fails to start a Singularity container because it is attempting to bind a file that does not yet exist.

To Reproduce
Using PGAP version 2023-10-03.build7061, following the steps from the quick start.

Starting directory structure:

drwxrwxr-x. 3 kkrick kkrick 4.0K Dec  4 15:16 .
drwxr-xr-x. 4 kkrick kkrick  110 Dec  4 11:54 ..
-rw-rw-r--. 1 kkrick kkrick  440 Dec  4 15:05 pgap.slurm

Submitting SLURM job with the following sbatch script named pgap.slurm:

#!/bin/env bash

#SBATCH --job-name=pgap-test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=05:00:00

/grp/rci/pgap/pgap.py -r -o /home/kkrick/Documents/pgap-test/mg37_results -g $HOME/.pgap/test_genomes/MG37/ASM2732v1.annotation.nucleotide.1.fasta -s 'Mycoplasmoides genitalium' --debug

Results in the following directory structure:

drwxrwxr-x. 3 kkrick kkrick 4.0K Dec  4 15:33 .
drwxr-xr-x. 4 kkrick kkrick  110 Dec  4 11:54 ..
-rw-r--r--. 1 kkrick kkrick 575K Oct  4 08:53 ASM2732v1.annotation.nucleotide.1.fasta
-rw-rw-r--. 1 kkrick kkrick  127 Dec  4 15:33 input.yaml
drwxrwxr-x. 3 kkrick kkrick   50 Dec  4 15:33 mg37_results
-rw-rw-r--. 1 kkrick kkrick  338 Dec  4 15:32 pgap.slurm
-rw-------. 1 kkrick kkrick  223 Dec  4 15:33 pgap_input_1zzxtcbo.yaml
-rw-------. 1 kkrick kkrick  460 Dec  4 15:33 pgap_submol_nf64t178.yaml
-rw-rw-r--. 1 kkrick kkrick    0 Dec  4 15:33 slurm-18657.out
-rw-rw-r--. 1 kkrick kkrick   55 Dec  4 15:33 submol.yaml

slurm-18657.out:

PGAP version 2023-10-03.build7061 is up to date.
Output will be placed in: /home/kkrick/Documents/pgap-test/mg37_results
PGAP failed, docker exited with rc = 1
Unable to find error in log file.

cwltool.log:

Original command: /grp/rci/pgap/pgap.py -r -o /home/kkrick/Documents/pgap-test/mg37_results -g /home/kkrick/.pgap/test_genomes/MG37/ASM2732v1.annotation.nucleotide.1.fasta -s Mycoplasmoides genitalium --debug

Docker command: /usr/bin/singularity exec --bind /home/kkrick/.pgap/input-2023-10-03.build7061:/pgap/input:ro --bind /home/kkrick/Documents/pgap-test:/pgap/user_input --bind /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml:/pgap/user_input/pgap_input.yaml:ro --bind /tmp:/tmp:rw --bind /home/kkrick/Documents/pgap-test/mg37_results:/pgap/output:rw --bind /home/kkrick/Documents/pgap-test/mg37_results/debug/log:/log/srv --pwd /pgap docker://ncbi/pgap:2023-10-03.build7061 /bin/taskset -c 0-3 cwltool --timestamps --debug --disable-color --preserve-entire-environment --outdir /pgap/output --tmpdir-prefix /pgap/output/debug/tmpdir/ --leave-tmpdir --tmp-outdir-prefix /pgap/output/debug/tmp-outdir/ --copy-outputs pgap/pgap.cwl /pgap/user_input/pgap_input.yaml

--- Start YAML Input ---
fasta:
    class: File
    location: ASM2732v1.annotation.nucleotide.1.fasta
submol:
    class: File
    location: pgap_submol_nf64t178.yaml
supplemental_data: { class: Directory, location: /pgap/input }
report_usage: true
--- End YAML Input ---

--- Start Runtime Report ---
{
    "CPU cores": 48,
    "Docker image": "ncbi/pgap:2023-10-03.build7061",
    "cpu flags": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities",
    "cpu model": "Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz",
    "max user processes": 378579,
    "memory (GiB)": 503.6,
    "memory per CPU core (GiB)": 10.5,
    "open files": 131072,
    "tmp disk space (GiB)": 434.4,
    "virtual memory": "unlimited",
    "work disk space (GiB)": 135545.7
}
--- End Runtime Report ---

INFO:    Using cached SIF image
taskset: failed to set pid 0's affinity: Invalid argument
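
One possible reading of that final taskset line (a guess, not confirmed in this thread): pgap.py builds /bin/taskset -c 0-3 from the requested CPU count, but a SLURM cgroup can grant the job a different set of physical cores, in which case pinning to cores 0-3 fails with "Invalid argument". A quick check from inside the job:

# Run inside the SLURM job: compare the cores that taskset -c 0-3 assumes
# with the cores the cgroup actually grants.
taskset -cp $$                             # current shell's affinity list
nproc                                      # CPU count after cgroup limits
grep Cpus_allowed_list /proc/self/status   # kernel's view of allowed CPUs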

I attempted to run the docker command directly and got the following error:

INFO:    Using cached SIF image
WARNING: skipping mount of /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml: stat /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml: no such file or directory
FATAL:   container creation failed: mount /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml->/pgap/user_input/pgap_input.yaml error: while mounting /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml: mount source /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml doesn't exist

So, it appears to be failing because the file pgap_input_1zzxtcbo.yaml does not exist.
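
A pre-flight check along these lines (a sketch, not part of PGAP; paths copied from the Docker command in the log above) would confirm which bind sources are missing at container start:

# Sketch: verify each --bind source from the failing singularity command
# exists before launching the container.
for src in /home/kkrick/.pgap/input-2023-10-03.build7061 \
           /home/kkrick/Documents/pgap-test \
           /home/kkrick/Documents/pgap-test/pgap_input_1zzxtcbo.yaml \
           /home/kkrick/Documents/pgap-test/mg37_results; do
    [ -e "$src" ] || echo "missing bind source: $src"
done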

Expected behavior
PGAP should run successfully.

Software versions
PGAP 2023-10-03.build7061 (see the log above).

Log Files
Ran with --debug, but the debug and debug/log directories were empty.

Additional context
I had read some troubleshooting advice in other reported issues and tried the following sbatch script instead, with the same results:

#!/bin/env bash

#SBATCH --job-name=pgap-test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=05:00:00

/grp/rci/pgap/pgap.py -r -o /home/kkrick/Documents/pgap-test/mg37_results -g $HOME/.pgap/test_genomes/MG37/ASM2732v1.annotation.nucleotide.1.fasta -s 'Mycoplasmoides genitalium' --docker singularity --ignore-all-errors --container-path /grp/rci/pgap/pgap_2023-10-03.build7061.sif --debug
azat-badretdin commented 12 months ago

Thank you for your detailed report (including attempts to mitigate this on your side, appreciated!), Kyle!

Yes, the SLURM environment tends to be problematic for PGAP. We will assess this situation ASAP.

azat-badretdin commented 12 months ago

Question:

Does

#SBATCH --cpus-per-task=4

work like setting the environment variable SLURM_CPUS_PER_TASK?

azat-badretdin commented 12 months ago

Note that the file Singularity complains about does exist, according to your listing. Is it possible that you have some kind of local Singularity settings that disfavor your directory as a mount source?
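
If site-wide configuration is the suspect, a check along these lines (assuming the default config location; Apptainer installs use /etc/apptainer/apptainer.conf instead) would show the mount-related settings:

# Assumed default config path; adjust for your site.
grep -E '^(bind path|mount hostfs|enable underlay|enable overlay)' \
    /etc/singularity/singularity.conf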

Also, I just found in our FAQ that:

> While nothing in the software intentionally prevents use on a cluster, we cannot provide assistance for this use case, given the additional complexity

kkrick-sdsu commented 12 months ago

Hi Azat,

As it turns out, the file exists only temporarily. I cleaned up the directory and re-ran the batch job. That listing was grabbed while PGAP was trying to run. Once the run fails, a second listing shows that the file is no longer there. PGAP must be deleting the file after the run fails. With that revelation, I am not sure what the actual problem is.
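
For anyone checking the same thing, a second shell on the compute node can confirm the file's short lifetime; a minimal sketch, run from the working directory while the job is active:

# Poll for the temporary input YAML while the job runs, to see when
# PGAP creates and removes it (stop with Ctrl-C).
while sleep 1; do ls -l pgap_input_*.yaml 2>/dev/null; done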

From my understanding, SLURM_CPUS_PER_TASK is set and usable while the job is running (for instance, to tell a program the actual core count it has to work with), while the sbatch flag --cpus-per-task controls how many cores are requested during scheduling.
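
A minimal job illustrates both effects (a sketch; exact behavior can vary across SLURM versions):

#!/bin/env bash

#SBATCH --cpus-per-task=4

# --cpus-per-task controls how many cores the scheduler allocates;
# SLURM also exports SLURM_CPUS_PER_TASK into the job environment
# for programs to read.
echo "SLURM_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-unset}"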

I understand that you cannot offer support for this use case. In that FAQ I do see that --no-internet may help, so I will try that as well.

For what it is worth, this was working with PGAP version 2023-05-17.build6771. I wish I had known about the --no-self-update flag, as my woes started when PGAP updated itself.

azat-badretdin commented 12 months ago

Kyle, you can still run the May version by using the use-version parameter and, as you discovered yourself, the --no-self-update flag.
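
For example, something along these lines (a sketch based on this thread; check pgap.py --help for the exact flag spellings):

# Hypothetical invocation: pin the earlier release and disable self-update.
# Flag names are taken from this discussion, not verified against --help.
/grp/rci/pgap/pgap.py --no-self-update --use-version 2023-05-17.build6771 \
    -r -o mg37_results \
    -g "$HOME/.pgap/test_genomes/MG37/ASM2732v1.annotation.nucleotide.1.fasta" \
    -s 'Mycoplasmoides genitalium'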

azat-badretdin commented 12 months ago

> As it turns out, the file exists only temporarily. I cleaned up the directory and re-ran the batch job. That listing was grabbed while PGAP was trying to run. Once the run fails, a second listing shows that the file is no longer there. PGAP must be deleting the file after the run fails. With that revelation, I am not sure what the actual problem is.

I opened an internal investigation (code PGAPX-1229) for this, Kyle.