desmodus1984 closed this issue 3 years ago
Hi @desmodus1984,
We recommend running ABySS in the Bloom filter mode, which is more efficient and uses significantly less memory: https://github.com/bcgsc/abyss#assembling-using-a-bloom-filter-de-bruijn-graph
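For reference, Bloom filter mode is enabled by adding a `B=` (Bloom filter memory budget) parameter to the usual `abyss-pe` invocation. A minimal sketch, following the linked README; the read file names here are placeholders, and `k` and `B` should be tuned per dataset:

```shell
# Bloom filter de Bruijn graph mode: B= caps the graph's memory footprint
# (e.g. 2 GB here) instead of holding every k-mer in a hash table.
# 'reads1.fa reads2.fa' are placeholder paired-end read files.
abyss-pe name=asm k=96 B=2G in='reads1.fa reads2.fa'
```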
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
I have the same error running transabyss
Please report:
- abyss-pe version
- lsb_release -d

Assembly error

abyss-pe command line:

module load python
source activate busco
abyss-pe k=63 np=48 name=myse-63 lib='BGI' \
    BGI='/fs/scratch/PHS0338/appz/musket-1.1/BGI-Readz.0 /fs/scratch/PHS0338/appz/musket-1.1/BGI-Readz.1' \
    long='ONT' ONT='/fs/scratch/PHS0338/appz/Ratatosk/bin/Rata-T3-Corr10K.fasta'

Last 20 lines of the output of abyss-pe:
An error occurred in MPI_Init on a NULL communicator
MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, and potentially your MPI job)
[p0902.ten.osc.edu:168669] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processe$

The application appears to have been direct launched using "srun", but OMPI was not built with SLURM's PMI support and therefore cannot execute. There are several options for building PMI support under SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or PMI-2 support. SLURM builds PMI-1 by default, or you can manually install PMI-2. You must then build Open MPI using --with-pmi pointing to the SLURM PMI library location.

Please configure as appropriate and try again.

An error occurred in MPI_Init on a NULL communicator
MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, and potentially your MPI job)
[p0902.ten.osc.edu:168668] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processe$
[p0902.ten.osc.edu:168622] OPAL ERROR: Unreachable in file pmix3x_client.c at line 112

The application appears to have been direct launched using "srun", but OMPI was not built with SLURM's PMI support and therefore cannot execute. (Same PMI-support advice as above.)

An error occurred in MPI_Init on a NULL communicator
MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, and potentially your MPI job)
[p0902.ten.osc.edu:168622] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processe$

make: *** [/users/PHS0338/jpac1984/.conda/envs/busco/bin/abyss-pe.Makefile:552: myse-63-1.fa] Error 1
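For what it's worth, this class of error usually means the MPI processes were launched by a mechanism (here, SLURM's srun) that the installed Open MPI was not built to talk to. Two commonly tried workarounds, sketched below; the exact PMI type and launcher available depend on the site's SLURM and Open MPI builds, so treat these as things to check with your cluster admins rather than a guaranteed fix:

```shell
# Check which PMI types this SLURM installation can offer to srun:
srun --mpi=list

# If a PMIx type is listed and matches the Open MPI build, request it
# explicitly (type name is site-dependent, e.g. pmix, pmix_v3):
srun --mpi=pmix -n 48 <command>

# Alternatively, launch through Open MPI's own mpirun instead of srun,
# which sidesteps the srun/PMI mismatch entirely:
mpirun -np 48 <command>
```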
Build error

Consider installing ABySS using Homebrew on either Linux or macOS with brew install abyss, or using Bioconda with conda install abyss.

Please report:
- gcc --version
- ./configure command line
- output of ./configure and make
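The version details requested by the template above can be collected in one go; a sketch, assuming ABySS and the build toolchain are on the PATH (lsb_release may be absent on minimal systems):

```shell
abyss-pe version   # prints the installed ABySS version
lsb_release -d     # one-line OS description
gcc --version      # compiler used for the build
```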
I was waiting for the conda update. Once it was released I updated, and now it looks like something broke after the update, making it impossible to run the task. Before the update, only the RResolver module failed and the job still ran to the end; now the job fails immediately.

Thanks.