Closed: yleniach closed this issue 1 year ago.
Are you sure that you are not confusing memory with disk space? There are not many compute systems with 4 TB of memory on a single node. What does
$ cat /proc/meminfo
say on the node where you ran FALCON?
Hi
The cluster people said that it is 4 TB of memory on the node, yes. This is why he was surprised that it kept failing and the error message said that it ran out of memory. Should I send our input files?
Thanks Ylenia
Hi,
This is what I get when I run the command you indicated:
MemTotal:       394420832 kB
MemFree:        103746912 kB
MemAvailable:   373570876 kB
I am also sending the configuration files attached to this email. Thanks Ylenia
OK, so the particular node where you ran
$ cat /proc/meminfo
only has 394 GB of RAM, which I think should still be more than enough. The fact that this is failing at the beginning, on the 0-rawreads step, which isn't one of the memory-intensive steps, tells me this is probably just a misconfiguration in your FALCON config file. I don't think email attachments to GitHub issues work; you have to attach the file directly to the issue itself.
I don't have the bandwidth to run CLR assemblies these days; please post your config here and I'll see if I can provide some configuration advice.
Thanks for following up. I am attaching the config file here (I had to save it as .txt; otherwise I get a warning that the file type is not supported). Thanks. fc_run_chaco_txt.txt
Ah, I see. You are running this locally, and you have it configured to run 120 concurrent jobs with 4 CPUs and 2 TB of RAM each (MB=2000000 is 2,000,000 MB, so 120 jobs would request roughly 240 TB); there is no way your 394 GB system can handle that.
[job.step.da]
NPROC=4
MB=2000000
njobs=120
Best case scenario for the box you are running on (given the RAM you posted), I would suggest dialing it back to something like:
NPROC=4
MB=16000
njobs=24
and even that might be a little too ambitious. I would give it a go like that, but if it fails again, continue to pull back on the number of concurrent jobs (njobs), maybe going down to njobs=16. Hope this helps.
I should add: njobs * MB shouldn't exceed the amount of RAM on the node, and njobs * NPROC shouldn't exceed the number of processors on the node, since the jobs will be running concurrently.
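(As a rough sanity check, not part of FALCON itself, you could compute those bounds directly on the node; a sketch, with MB and NPROC mirroring the values suggested above:)
MB=16000    # per-job memory from [job.step.da], in megabytes
NPROC=4     # per-job CPU count from [job.step.da]
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "max njobs by RAM:  $(( mem_mb / MB ))"
echo "max njobs by CPUs: $(( $(nproc) / NPROC ))"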
Thanks. This is very helpful; at least now I know what the issue may be and I can work on fixing it.
Thanks! Ylenia
Hi, I am still struggling with this same assembly. I decreased the number of jobs in the fc_run_chaco.txt configuration file, but it keeps getting stuck going from 1-preads_ovl to 2-asm-falcon. I attach the output file I got yesterday when the run failed, and the config file I used. yc_chacofc-amd089-584305.txt
The good news is that all pre-assembly and overlapping steps have completed, and the job is failing on the final step. The bad news is I'm not exactly sure why, and the error message isn't very clear about what the problem is. Is there any output at all in the 2-asm-falcon dir? Can you check the *.stderr file in the 2-asm-falcon dir, if it exists? Can you try running user_script.sh by itself?
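(For example, something like this, run from the project root, would show the end of any stderr logs from that step; a minimal sketch:)
$ cd 2-asm-falcon
$ ls *.stderr
$ tail -n 50 *.stderr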
This issue may be relevant: https://github.com/PacificBiosciences/pbbioconda/issues/294
Thanks for the response. I tried running the command user_script.sh, but maybe I am doing something wrong, because I got this message:
(base) [ychiari@hopper2 2-asm-falcon]$ sbatch user_script.sh
sbatch: error: This does not look like a batch script. The first
sbatch: error: line must start with #! followed by the path to an interpreter.
sbatch: error: For instance: #!/bin/sh
The stderr file in 2-asm-falcon exists, but I don't know if it is complete. I am attaching a copy of what is in 2-asm-falcon (2-asm-falcon.docx). The weird thing is that the preads4falcon fasta file is a shortcut to the 1- folder. Also, I asked a colleague for feedback, and she told me to look for the mypwatcher, but I don't have that anywhere in the main directory of the assembly or in the subdirectory.
Also, I do have the --- in the preads.m4 file, as in the issue you sent me. Any idea why these --- are added to the file, and is it OK to just remove them as the issue indicates?
user_script.sh cannot be submitted as a batch script unless you add the shebang #!/bin/bash to the beginning of the file, which you could do. Or you could run the script locally (on a node with enough resources) like this:
bash -ex user_script.sh
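(A hedged one-liner for prepending the shebang in place, assuming GNU sed as found on most Linux clusters:)
$ sed -i '1i #!/bin/bash' user_script.sh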
Based on your screenshot, the command is failing before anything is generated. It's not odd that preads4falcon.fasta is a shortcut (symlink) to the 1-preads_ovl folder, because that is the output from that step and the input for this step.
mypwatcher doesn't exist because that directory is only created when you run in job-scheduler mode; you are submitting this to your SLURM grid as a local script to run on a single machine, so there is no need to watch for processes submitted to other nodes. Instead of output in the mypwatcher directory, you get the .stdout and .stderr files directly in the folders where the commands are run.
Again, I'll ask: what is the output in the *.stderr file?
I don't think the --- is a problem here; I have that in successful runs, but removing it might be worth trying. I'm thinking that perhaps the wrong version of nim-falcon got installed for whatever reason. Can you type conda list and check the nim-falcon version?
https://github.com/PacificBiosciences/pbbioconda/issues/294#issuecomment-703179264
I have the nim-falcon version 3.0.1. Which version should I have?
nim-falcon=3.0.2 is the version that should be installed, but due to conda intricacies, if it's not specified, sometimes the wrong version gets installed. See this comment: https://github.com/PacificBiosciences/pbbioconda/issues/294#issuecomment-703179264
Try conda install nim-falcon=3.0.2 and re-run the final step of the pipeline.
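(If conda reports that the package can't be found, it may be that the bioconda and conda-forge channels aren't configured in your setup; a hedged variant that names them explicitly:)
$ conda install -c bioconda -c conda-forge nim-falcon=3.0.2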
Perfect, thanks!
I tried installing it and I get an error message: PackagesNotFoundError: The following packages are not available from current channels:
Any suggestions? I also contacted the people at the cluster to see if they can help. Thanks
Hi, we have tried several ways to install that nim-falcon version, but there is always some incompatibility. I am afraid that if we delete the previous version of FALCON and reinstall it, then we would need to re-run the entire assembly. Do you have any suggestions on what we should do? Thanks
That would be the recommendation: remove the previous installation of FALCON, re-install, and run the final step of the pipeline. As long as you don't delete any of the project run folders, you won't have to re-run any of the previous steps that have already completed; you can proceed from the final step.
OK, thanks. I will let you know if it fixes the problem (hopefully it does!). Thanks
Hi again. The people at the university cluster have tried to install that nim-falcon version in different ways, and they keep running into conflicts. This is what they wrote to me this morning; do you have any suggestions on how to fix the problem? Thanks
I have been trying to install nim-falcon/3.0.2 with different versions of Python with conda, but I have not had any luck overcoming the dependencies/conflicts. So far I've tried forcing it to create a conda environment with:
python/3.8.5 (the default in the anaconda3 module)
python/3.9.9
python/3.7.5
Did the solution here: https://github.com/PacificBiosciences/pbbioconda/issues/294#issuecomment-703179264 not work for you?
$ python --version
Python 3.9.6
$ conda create -n pba pb-assembly nim-falcon==3.0.2
$ conda activate pba
(pba) $ conda list nim-falcon
# packages in environment at /home/UNIXHOME/gconcepcion/miniconda3/envs/pba:
#
# Name Version Build Channel
nim-falcon 3.0.2 h18d090a_1 bioconda
The people at the cluster managed to install it using the commands you wrote. However, after I restarted the run, it failed almost immediately and I don't know why. I attach the error file and another file, in case you can see from there where the issue is this time.
It looks like even when using the right version of nim-falcon, I still get the same issue at the 2-asm stage.
Thanks for adding the run-P0d915f313fada5.bash.txt log. Is that the only logfile like that in the directory? Unfortunately it didn't provide any meaningful output that I can use to figure out why the run failed.
Can you try running user_script.sh by itself? Add the shebang (#!/bin/bash) to the first line of the file and submit it to your SLURM grid like:
sbatch -c 24 user_script.sh
Adjust -c 24 to however many cores are available on your system.
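(If you're unsure how many CPUs the nodes have, SLURM can report that; a sketch:)
$ sinfo -N -o '%N %c'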
Sorry, I am not sure I understand. This is my .sh file; is this what you called user_script.sh? If yes, should I just submit it by adding -c 24 (or however many cores I choose) after the sbatch command? I already have the info about the cores in my .sh file. fc_run.txt
I have other logfiles in the 2-asm folder: run-P6f7e4b1c35df06.bash.stderr and run-P6f7e4b1c35df06.bash.stdout
In the all.log file it says:
2023-03-17 22:36:03,386 - pypeflow.simple_pwatcher_bridge:94 - ERROR - Task Node(2-asm-falcon) failed with exit-code=1
which is also what is written in the error file of this run. yc_chacofc-amd089-617616.txt
No, user_script.sh is the script that's inside your 2-asm-falcon directory. That's the script that runs the final set of tasks (or job) for FALCON to complete the assembly. Try adding the shebang #!/bin/bash to the first line of the script and submitting it like:
$ cd 2-asm-falcon
2-asm-falcon$ sbatch -c 24 ./user_script.sh
As for the other logfiles in the 2-asm folder (run-P6f7e4b1c35df06.bash.stderr and run-P6f7e4b1c35df06.bash.stdout): what do these logfiles say?
OK, I found the user_script.sh file. It starts with the following; does it look OK? The #!/bin/bash should go before all these lines, correct?
IFS=$' '
set -vxeuo pipefail
hostname
pwd
date
Which one of the .sh files in the 2-asm folder gets produced first? I am asking because all the .stdout and .stderr files in the 2-asm folder reference the base conda env, which runs the previous version of nim-falcon, and not the new conda env that I created with the updated nim-falcon version. So do I have to make sure that the right env is loaded/activated by adding this info to each of these files?
One thing you can do to ensure the files are not old is to delete the 2-asm-falcon directory completely; on the re-run it will be recreated with all new files.
For this it shouldn't matter, though. You just need to submit user_script.sh to your grid by itself, and it doesn't reference any environment variables at all; it just assumes all the necessary bins/executables are already in your PATH. You can see by reading user_script.sh that it doesn't reference your old environment at all, only the necessary FALCON bins.
In order to make sure user_script.sh sees your correct conda environment when you submit it to the grid, you need to first activate your conda environment and then pass --export=ALL when you submit. So the process should go something like this:
$ conda activate pba
(pba) $ cd 2-asm-falcon
(pba) 2-asm-falcon$ sbatch -c 24 --export=ALL ./user_script.sh
Switch pba to whatever your newly installed conda environment is called.
And yes, you need to add the shebang #!/bin/bash to the top of user_script.sh in order for SLURM to accept it as a valid batch script.
Thanks. Quick follow-up question: if I remove the 2-asm-falcon directory completely, do I restart the assembly using the normal sbatch fc_run.sh? Basically, will the assembly restart from the end of the 1-asm directory and rebuild the 2-asm directory if I just restart it?
Yes, if you remove the 2-asm-falcon directory, then you have to run it the normal way and wait for it to regenerate the 2-asm-falcon directory and fail again; then you can submit user_script.sh independently.
That's the "clean" way to do it, but you could also just submit user_script.sh by itself without deleting the directory first, if you don't want to bother with the whole process. The stuff that's already there won't get in the way of the job itself.
Ok, thanks. I will try the "clean" way and see what happens
Actually, it already failed, and I have no idea why. I don't even have the user_script.sh file in the 2-asm directory anymore. This is my .sh file (fc_run.txt), and this is the end of the .log file (the log file is too large to attach here as it is). There is still the same issue with the 2-asm directory, but I cannot change user_script.sh because it has not been created. Any idea why it now stopped even sooner than previously?
This is also the error file of this run: same error, but this time it didn't generate user_script.sh in 2-asm. yc_chacofc-amd090-653844.err.txt
What do these two files say?
2-asm-falcon/run-P0d915f313fada5.bash.stdout
2-asm-falcon/run-P0d915f313fada5.bash.stderr
run-P0d915f313fada5.bash.stderr.txt
The stdout is too large; I can attach part of it here, unless you have a way for me to upload large files.
Just attach the end of it, the last few thousand lines or so. Or, if you open it up and see a clear error, the part with the error.
Ohh, actually, I do see this error in the stderr.txt you pasted:
+ /bin/bash task.sh
/opt/sw/other/apps/anaconda3/2020.11-py-3.8.5/bin/python3: No module named pypeflow.do_task
+++ pwd
Can you type:
$ source activate new-nim-falcon
$ python -m pypeflow
and tell me what version it says you have installed.
I get this; should I load pypeflow?
python -m pypeflow
/home/ychiari/.conda/envs/new-nim-falcon/bin/python: No module named pypeflow.main; 'pypeflow' is a package and cannot be directly executed
The people at the cluster actually just installed pypeflow two days ago, and this may explain why it doesn't work anymore. I have asked them which version they installed. If you know what I should type to figure it out, please let me know, because the command you wrote gave me the error above. Thanks
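(For what it's worth, a package's version can usually be checked with pip even when the module can't be executed directly; a hedged alternative to the command above:)
$ python -m pip show pypeflow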
pypeflow should have been installed during the conda environment creation process:
conda create -n pba pb-assembly nim-falcon==3.0.2
So I'm guessing something went wrong if it wasn't.
What do you get if you type
(pba) $ conda list
and
(pba) $ python -m pip freeze
For example, I see this:
(pba) $ conda list
# packages in environment at /home/UNIXHOME/gconcepcion/miniconda3/envs/pba:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
bedtools 2.30.0 h468198e_3 bioconda
blasr 5.3.5 0 bioconda
bwa 0.7.17 h7132678_9 bioconda
bzip2 1.0.8 h7f98852_4 conda-forge
c-ares 1.18.1 h7f98852_0 conda-forge
ca-certificates 2022.12.7 ha878542_0 conda-forge
curl 7.87.0 h6312ad2_0 conda-forge
falcon-kit 1.8.1 pypi_0 pypi
falcon-phase 1.2.0 pypi_0 pypi
falcon-unzip 1.3.7 pypi_0 pypi
future 0.18.3 pyhd8ed1ab_0 conda-forge
htslib 1.10.2 hd3b49d5_1 bioconda
k8 0.2.5 hd03093a_2 bioconda
keyutils 1.6.1 h166bdaf_0 conda-forge
krb5 1.20.1 hf9c8cef_0 conda-forge
ld_impl_linux-64 2.40 h41732ed_0 conda-forge
libblas 3.9.0 16_linux64_openblas conda-forge
libcblas 3.9.0 16_linux64_openblas conda-forge
libcurl 7.87.0 h6312ad2_0 conda-forge
libdeflate 1.6 h516909a_0 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 h516909a_1 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libgfortran-ng 12.2.0 h69a702a_19 conda-forge
libgfortran5 12.2.0 h337968e_19 conda-forge
libgomp 12.2.0 h65d4601_19 conda-forge
liblapack 3.9.0 16_linux64_openblas conda-forge
libnghttp2 1.51.0 hdcd2b5c_0 conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libopenblas 0.3.21 pthreads_h78a6416_3 conda-forge
libsqlite 3.40.0 h753d276_0 conda-forge
libssh2 1.10.0 haa6b8db_3 conda-forge
libstdcxx-ng 12.2.0 h46fd767_19 conda-forge
libuuid 2.32.1 h7f98852_1000 conda-forge
libzlib 1.2.13 h166bdaf_4 conda-forge
minimap2 2.24 h7132678_1 bioconda
mummer4 4.0.0rc1 pl5321h87f3376_4 bioconda
ncurses 6.3 h27087fc_1 conda-forge
networkx 3.0 pyhd8ed1ab_0 conda-forge
nim-falcon 3.0.2 h18d090a_1 bioconda
numpy 1.24.2 py38h10c12cc_0 conda-forge
openssl 1.1.1t h0b41bf4_0 conda-forge
pb-assembly 0.0.8 hdfd78af_1 bioconda
pb-dazzler 0.0.1 hec16e2b_2 bioconda
pb-falcon 2.2.4 py38h1bd3507_1 bioconda
pb-falcon-phase 0.1.0 h8e334b0_1 bioconda
pbgcpp 2.0.2 h9ee0642_1 bioconda
pbmm2 1.10.0 h9ee0642_0 bioconda
pcre 8.45 h9c3ff4c_0 conda-forge
perl 5.32.1 2_h7f98852_perl5 conda-forge
pip 23.0.1 pyhd8ed1ab_0 conda-forge
pysam 0.16.0.1 py38hbdc2ae9_1 bioconda
python 3.8.15 h257c98d_0_cpython conda-forge
python-edlib 1.3.9 py38h4a32c8e_1 bioconda
python-intervaltree 3.1.0 pyh864c0ab_0 bioconda
python-msgpack 0.6.1 py38h4a32c8e_5 bioconda
python-sortedcontainers 2.4.0 pyh5e36f6f_0 bioconda
python_abi 3.8 3_cp38 conda-forge
racon 1.5.0 h7ff8a90_0 bioconda
readline 8.1.2 h0f457ee_0 conda-forge
samtools 1.6 hcd7b337_9 bioconda
setuptools 67.5.1 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h27826a3_0 conda-forge
wheel 0.38.4 pyhd8ed1ab_0 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
zlib 1.2.13 h166bdaf_4 conda-forge
(pba) $ python -m pip freeze
coloredlogs==15.0.1
colormath==3.0.0
commonmark==0.9.1
edlib==1.3.9
falcon-kit==1.8.1
falcon-phase==1.2.0
falcon-unzip==1.3.7
future @ file:///home/conda/feedstock_root/build_artifacts/future_1673596611778/work
humanfriendly==10.0
importlib-metadata==4.8.2
intervaltree==3.1.0
lzstring==1.0.4
Markdown==3.3.6
msgpack==0.6.1
multiqc==1.11
networkx==2.6.3
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1675642515540/work
-e git+https://github.com/open2c/pairtools.git@254695a1ae8ef23741cc7d43f6b86cca2f909a97#egg=pairtools
pypeflow==2.3.0
pysam==0.16.0.1
rich==10.15.2
simplejson==3.17.6
sortedcontainers==2.4.0
spectra==0.0.11
conda list gives me this:
pypeflow 0.0.1 pypi_0 pypi
and python -m pip freeze gives me this:
alabaster==0.7.12 anaconda-client==1.7.2 anaconda-navigator==1.10.0 anaconda-project==0.8.3 argh==0.26.2 argon2-cffi @ file:///tmp/build/80754af9/argon2-cffi_1596828493937/work asn1crypto @ file:///tmp/build/80754af9/asn1crypto_1596577642040/work astroid @ file:///tmp/build/80754af9/astroid_1592495912941/work astropy==4.0.2 async-generator==1.10 atomicwrites==1.4.0 attrs @ file:///tmp/build/80754af9/attrs_1604765588209/work autopep8 @ file:///tmp/build/80754af9/autopep8_1596578164842/work Babel @ file:///tmp/build/80754af9/babel_1605108370292/work backcall==0.2.0 backports.functools-lru-cache @ file:///tmp/build/80754af9/backports.functools_lru_cache_1618170165463/work backports.shutil-get-terminal-size==1.0.0 backports.tempfile @ file:///home/linux1/recipes/ci/backports.tempfile_1610991236607/work backports.weakref==1.0.post1 beautifulsoup4 @ file:///tmp/build/80754af9/beautifulsoup4_1601924105527/work bitarray @ file:///tmp/build/80754af9/bitarray_1605065113847/work bkcharts==0.2 bleach @ file:///tmp/build/80754af9/bleach_1600439572647/work bokeh @ file:///tmp/build/80754af9/bokeh_1603297833684/work boto==2.49.0 Bottleneck==1.3.2 brotlipy==0.7.0 certifi==2020.6.20 cffi @ file:///tmp/build/80754af9/cffi_1600699146221/work chardet==3.0.4 click==7.1.2 cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1598884132938/work clyent==1.2.2 colorama @ file:///tmp/build/80754af9/colorama_1603211150991/work conda==4.14.0 conda-build==3.20.5 conda-package-handling @ file:///tmp/build/80754af9/conda-package-handling_1649087926789/work conda-verify==3.4.2 contextlib2==0.6.0.post1 cryptography @ file:///tmp/build/80754af9/cryptography_1601046815590/work cycler==0.10.0 cytoolz==0.11.0 dask @ file:///tmp/build/80754af9/dask-core_1602083700509/work decorator==4.4.2 defusedxml==0.6.0 diff-match-patch @ file:///tmp/build/80754af9/diff-match-patch_1594828741838/work distributed @ file:///tmp/build/80754af9/distributed_1605066520644/work docutils==0.16 entrypoints==0.3 et-xmlfile==1.0.1 fastcache==1.1.0 filelock==3.0.12 flake8 @ file:///tmp/build/80754af9/flake8_1601911421857/work Flask==1.1.2 future==0.18.2 gevent @ file:///tmp/build/80754af9/gevent_1601397537062/work glob2==0.7 gmpy2==2.0.8 greenlet @ file:///tmp/build/80754af9/greenlet_1600874013538/work HeapDict==1.0.1 html5lib @ file:///tmp/build/80754af9/html5lib_1593446221756/work idna @ file:///tmp/build/80754af9/idna_1593446292537/work imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work imagesize==1.2.0 iniconfig @ file:///tmp/build/80754af9/iniconfig_1602780191262/work intervaltree @ file:///tmp/build/80754af9/intervaltree_1598376443606/work ipykernel @ file:///tmp/build/80754af9/ipykernel_1596207638929/work/dist/ipykernel-5.3.4-py3-none-any.whl ipython @ file:///tmp/build/80754af9/ipython_1604101197014/work ipython_genutils==0.2.0 ipywidgets @ file:///tmp/build/80754af9/ipywidgets_1601490159889/work isort @ file:///tmp/build/80754af9/isort_1602603989581/work itsdangerous==1.1.0 jdcal==1.4.1 jedi @ file:///tmp/build/80754af9/jedi_1592841866100/work jeepney @ file:///tmp/build/80754af9/jeepney_1605069705079/work Jinja2==2.11.2 joblib @ file:///tmp/build/80754af9/joblib_1601912903842/work json5==0.9.5 jsonschema @ file:///tmp/build/80754af9/jsonschema_1602607155483/work jupyter==1.0.0 jupyter-client @ file:///tmp/build/80754af9/jupyter_client_1601311786391/work jupyter-console @ file:///tmp/build/80754af9/jupyter_console_1598884538475/work jupyter-core==4.6.3 jupyterlab==2.2.6 jupyterlab-pygments @ 
file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work jupyterlab-server @ file:///tmp/build/80754af9/jupyterlab_server_1594164409481/work keyring @ file:///tmp/build/80754af9/keyring_1601490835422/work kiwisolver @ file:///tmp/build/80754af9/kiwisolver_1604014535162/work lazy-object-proxy==1.4.3 libarchive-c==2.9 llvmlite==0.34.0 locket==0.2.0 lxml @ file:///tmp/build/80754af9/lxml_1603216285000/work MarkupSafe==1.1.1 mccabe==0.6.1 mistune==0.8.4 mkl-fft==1.2.0 mkl-random==1.1.1 mock==4.0.2 more-itertools @ file:///tmp/build/80754af9/more-itertools_1605111547926/work mpmath==1.1.0 msgpack==1.0.0 multipledispatch==0.6.0 navigator-updater==0.2.1 nb-conda-kernels @ file:///tmp/build/80754af9/nb_conda_kernels_1606775941989/work nbclient @ file:///tmp/build/80754af9/nbclient_1602783176460/work nbconvert @ file:///tmp/build/80754af9/nbconvert_1601914830498/work nbformat @ file:///tmp/build/80754af9/nbformat_1602783287752/work nest-asyncio @ file:///tmp/build/80754af9/nest-asyncio_1605115881283/work networkx @ file:///tmp/build/80754af9/networkx_1598376031484/work nltk @ file:///tmp/build/80754af9/nltk_1592496090529/work nose==1.3.7 notebook @ file:///tmp/build/80754af9/notebook_1601501575118/work numba @ file:///tmp/build/80754af9/numba_1600100669015/work numexpr==2.7.1 numpydoc @ file:///tmp/build/80754af9/numpydoc_1605117425582/work olefile==0.46 openpyxl @ file:///tmp/build/80754af9/openpyxl_1598113097404/work packaging==20.4 pandocfilters @ file:///tmp/build/80754af9/pandocfilters_1605120460739/work parso==0.7.0 partd==1.1.0 path @ file:///tmp/build/80754af9/path_1598376507494/work pathlib2==2.3.5 pathtools==0.1.2 patsy==0.5.1 pep8==1.7.1 pexpect==4.8.0 pickleshare==0.7.5 Pillow @ file:///tmp/build/80754af9/pillow_1603822255246/work pkginfo==1.6.1 pluggy==0.13.1 ply==3.11 prometheus-client==0.8.0 prompt-toolkit @ file:///tmp/build/80754af9/prompt-toolkit_1602688806899/work psutil @ file:///tmp/build/80754af9/psutil_1598370257551/work ptyprocess==0.6.0 py @ file:///tmp/build/80754af9/py_1593446248552/work pycodestyle==2.6.0 pycosat==0.6.3 pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work pycurl==7.43.0.6 pydocstyle @ file:///tmp/build/80754af9/pydocstyle_1598885001695/work pyflakes==2.2.0 Pygments @ file:///tmp/build/80754af9/pygments_1604103097372/work pylint @ file:///tmp/build/80754af9/pylint_1598623985952/work pyodbc===4.0.0-unsupported pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1594392929924/work pyparsing==2.4.7 pypeflow==0.0.1 pyrsistent @ file:///tmp/build/80754af9/pyrsistent_1600141720057/work PySocks==1.7.1 pytest==0.0.0 python-dateutil==2.8.1 python-jsonrpc-server @ file:///tmp/build/80754af9/python-jsonrpc-server_1600278539111/work python-language-server @ file:///tmp/build/80754af9/python-language-server_1600454544709/work pytz==2020.1 PyWavelets @ file:///tmp/build/80754af9/pywavelets_1601658317819/work pyxdg @ file:///tmp/build/80754af9/pyxdg_1603822279816/work PyYAML==5.3.1 QDarkStyle==2.8.1 QtAwesome @ file:///tmp/build/80754af9/qtawesome_1602272867890/work qtconsole @ file:///tmp/build/80754af9/qtconsole_1600870028330/work QtPy==1.9.0 regex @ file:///tmp/build/80754af9/regex_1602786672676/work requests @ file:///tmp/build/80754af9/requests_1592841827918/work rope @ file:///tmp/build/80754af9/rope_1602264064449/work Rtree==0.9.4 ruamel_yaml==0.15.87 seaborn @ file:///tmp/build/80754af9/seaborn_1600553570093/work SecretStorage==3.1.2 Send2Trash==1.5.0 simplegeneric==0.8.1 singledispatch @ 
file:///tmp/build/80754af9/singledispatch_1602523705405/work sip==4.19.13 six @ file:///tmp/build/80754af9/six_1605205327372/work snowballstemmer==2.0.0 sortedcollections==1.2.1 sortedcontainers==2.2.2 soupsieve==2.0.1 Sphinx @ file:///tmp/build/80754af9/sphinx_1597428793432/work sphinxcontrib-applehelp==1.0.2 sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==1.0.3 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.4 sphinxcontrib-websupport @ file:///tmp/build/80754af9/sphinxcontrib-websupport_1597081412696/work spyder @ file:///tmp/build/80754af9/spyder_1599056981321/work spyder-kernels @ file:///tmp/build/80754af9/spyder-kernels_1599056754858/work SQLAlchemy @ file:///tmp/build/80754af9/sqlalchemy_1603397987316/work statsmodels @ file:///tmp/build/80754af9/statsmodels_1602280205159/work sympy @ file:///tmp/build/80754af9/sympy_1605119542615/work tables==3.6.1 tblib @ file:///tmp/build/80754af9/tblib_1597928476713/work terminado==0.9.1 testpath==0.4.4 threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl toml @ file:///tmp/build/80754af9/toml_1592853716807/work toolz @ file:///tmp/build/80754af9/toolz_1601054250827/work tornado==6.0.4 traitlets @ file:///tmp/build/80754af9/traitlets_1602787416690/work ujson @ file:///tmp/build/80754af9/ujson_1602523317881/work unicodecsv==0.14.1 urllib3 @ file:///tmp/build/80754af9/urllib3_1603305693037/work watchdog @ file:///tmp/build/80754af9/watchdog_1593447344699/work wcwidth @ file:///tmp/build/80754af9/wcwidth_1593447189090/work webencodings==0.5.1 Werkzeug==1.0.1 widgetsnbextension==3.5.1 wrapt==1.11.2 wurlitzer @ file:///tmp/build/80754af9/wurlitzer_1594753850195/work xlrd==1.2.0 XlsxWriter @ file:///tmp/build/80754af9/xlsxwriter_1602692860603/work xlwt==1.3.0 xmltodict @ file:///Users/ktietz/demo/mc3/conda-bld/xmltodict_1629301980723/work yapf @ file:///tmp/build/80754af9/yapf_1593528177422/work zict==2.0.0 zipp @ file:///tmp/build/80754af9/zipp_1604001098328/work zope.event==4.5.0 zope.interface @ file:///tmp/build/80754af9/zope.interface_1602002420968/work
I have to run the nim-falcon version from a conda environment, because it doesn't install in the "normal" conda env. I get an error:
conda create -n pba pb-assembly nim-falcon==3.0.2
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
Current channels:
To search for alternate channels that may provide the conda package you're looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Should I search for pypeflow in the env where I am running the newer version of nim-falcon?
Did you run
(pba) $ python -m pip freeze
from within your new-nim-falcon conda environment? Because that should be a much shorter list; it looks like you gave me a list of your system Python packages.
What was the rest of the output from (pba) $ conda list?
If pypeflow 0.0.1 is the only installed package, then you don't have pb-assembly (FALCON) installed at all. I need the info on the packages that are installed in your new-nim-falcon conda environment, or else I can't troubleshoot what's going on here.
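(A hedged shortcut: conda can list a named environment's packages without activating it first:)
$ conda list -n new-nim-falcon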
Yes, pypeflow 0.0.1 is the only installed package, I think.
(new-nim-falcon) [ychiari@hopper1 repeatm]$ python -m pip freeze alabaster==0.7.12 anaconda-client==1.7.2 anaconda-navigator==1.10.0 anaconda-project==0.8.3 argh==0.26.2 argon2-cffi @ file:///tmp/build/80754af9/argon2-cffi_1596828493937/work asn1crypto @ file:///tmp/build/80754af9/asn1crypto_1596577642040/work astroid @ file:///tmp/build/80754af9/astroid_1592495912941/work astropy==4.0.2 async-generator==1.10 atomicwrites==1.4.0 attrs @ file:///tmp/build/80754af9/attrs_1604765588209/work autopep8 @ file:///tmp/build/80754af9/autopep8_1596578164842/work Babel @ file:///tmp/build/80754af9/babel_1605108370292/work backcall==0.2.0 backports.functools-lru-cache @ file:///tmp/build/80754af9/backports.functools_lru_cache_1618170165463/work backports.shutil-get-terminal-size==1.0.0 backports.tempfile @ file:///home/linux1/recipes/ci/backports.tempfile_1610991236607/work backports.weakref==1.0.post1 beautifulsoup4 @ file:///tmp/build/80754af9/beautifulsoup4_1601924105527/work bitarray @ file:///tmp/build/80754af9/bitarray_1605065113847/work bkcharts==0.2 bleach @ file:///tmp/build/80754af9/bleach_1600439572647/work bokeh @ file:///tmp/build/80754af9/bokeh_1603297833684/work boto==2.49.0 Bottleneck==1.3.2 brotlipy==0.7.0 certifi==2020.6.20 cffi @ file:///tmp/build/80754af9/cffi_1600699146221/work chardet==3.0.4 click==7.1.2 cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1598884132938/work clyent==1.2.2 colorama @ file:///tmp/build/80754af9/colorama_1603211150991/work conda==4.14.0 conda-build==3.20.5 conda-package-handling @ file:///tmp/build/80754af9/conda-package-handling_1649087926789/work conda-verify==3.4.2 contextlib2==0.6.0.post1 cryptography @ file:///tmp/build/80754af9/cryptography_1601046815590/work cycler==0.10.0 cytoolz==0.11.0 dask @ file:///tmp/build/80754af9/dask-core_1602083700509/work decorator==4.4.2 defusedxml==0.6.0 diff-match-patch @ file:///tmp/build/80754af9/diff-match-patch_1594828741838/work distributed @ file:///tmp/build/80754af9/distributed_1605066520644/work docutils==0.16 edlib==1.3.9 entrypoints==0.3 et-xmlfile==1.0.1 falcon-kit==1.8.1 falcon-phase==1.2.0 falcon-unzip==1.3.7 fastcache==1.1.0 filelock==3.0.12 flake8 @ file:///tmp/build/80754af9/flake8_1601911421857/work Flask==1.1.2 future==0.18.2 gevent @ file:///tmp/build/80754af9/gevent_1601397537062/work glob2==0.7 gmpy2==2.0.8 greenlet @ file:///tmp/build/80754af9/greenlet_1600874013538/work HeapDict==1.0.1 html5lib @ file:///tmp/build/80754af9/html5lib_1593446221756/work idna @ file:///tmp/build/80754af9/idna_1593446292537/work imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work imagesize==1.2.0 iniconfig @ file:///tmp/build/80754af9/iniconfig_1602780191262/work intervaltree @ file:///tmp/build/80754af9/intervaltree_1598376443606/work ipykernel @ file:///tmp/build/80754af9/ipykernel_1596207638929/work/dist/ipykernel-5.3.4-py3-none-any.whl ipython @ file:///tmp/build/80754af9/ipython_1604101197014/work ipython_genutils==0.2.0 ipywidgets @ file:///tmp/build/80754af9/ipywidgets_1601490159889/work isort @ file:///tmp/build/80754af9/isort_1602603989581/work itsdangerous==1.1.0 jdcal==1.4.1 jedi @ file:///tmp/build/80754af9/jedi_1592841866100/work jeepney @ file:///tmp/build/80754af9/jeepney_1605069705079/work Jinja2==2.11.2 joblib @ file:///tmp/build/80754af9/joblib_1601912903842/work json5==0.9.5 jsonschema @ file:///tmp/build/80754af9/jsonschema_1602607155483/work jupyter==1.0.0 jupyter-client @ file:///tmp/build/80754af9/jupyter_client_1601311786391/work jupyter-console @ 
file:///tmp/build/80754af9/jupyter_console_1598884538475/work jupyter-core==4.6.3 jupyterlab==2.2.6 jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work jupyterlab-server @ file:///tmp/build/80754af9/jupyterlab_server_1594164409481/work keyring @ file:///tmp/build/80754af9/keyring_1601490835422/work kiwisolver @ file:///tmp/build/80754af9/kiwisolver_1604014535162/work lazy-object-proxy==1.4.3 libarchive-c==2.9 llvmlite==0.34.0 locket==0.2.0 lxml @ file:///tmp/build/80754af9/lxml_1603216285000/work MarkupSafe==1.1.1 mccabe==0.6.1 mistune==0.8.4 mkl-fft==1.2.0 mkl-random==1.1.1 mock==4.0.2 more-itertools @ file:///tmp/build/80754af9/more-itertools_1605111547926/work mpmath==1.1.0 msgpack==1.0.0 multipledispatch==0.6.0 navigator-updater==0.2.1 nb-conda-kernels @ file:///tmp/build/80754af9/nb_conda_kernels_1606775941989/work nbclient @ file:///tmp/build/80754af9/nbclient_1602783176460/work nbconvert @ file:///tmp/build/80754af9/nbconvert_1601914830498/work nbformat @ file:///tmp/build/80754af9/nbformat_1602783287752/work nest-asyncio @ file:///tmp/build/80754af9/nest-asyncio_1605115881283/work networkx @ file:///tmp/build/80754af9/networkx_1598376031484/work nltk @ file:///tmp/build/80754af9/nltk_1592496090529/work nose==1.3.7 notebook @ file:///tmp/build/80754af9/notebook_1601501575118/work numba @ file:///tmp/build/80754af9/numba_1600100669015/work numexpr==2.7.1 numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1675642515540/work numpydoc @ file:///tmp/build/80754af9/numpydoc_1605117425582/work olefile==0.46 openpyxl @ file:///tmp/build/80754af9/openpyxl_1598113097404/work packaging==20.4 pandocfilters @ file:///tmp/build/80754af9/pandocfilters_1605120460739/work parso==0.7.0 partd==1.1.0 path @ file:///tmp/build/80754af9/path_1598376507494/work pathlib2==2.3.5 pathtools==0.1.2 patsy==0.5.1 pep8==1.7.1 pexpect==4.8.0 pickleshare==0.7.5 Pillow @ file:///tmp/build/80754af9/pillow_1603822255246/work pkginfo==1.6.1 pluggy==0.13.1 ply==3.11 prometheus-client==0.8.0 prompt-toolkit @ file:///tmp/build/80754af9/prompt-toolkit_1602688806899/work psutil @ file:///tmp/build/80754af9/psutil_1598370257551/work ptyprocess==0.6.0 py @ file:///tmp/build/80754af9/py_1593446248552/work pycodestyle==2.6.0 pycosat==0.6.3 pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work pycurl==7.43.0.6 pydocstyle @ file:///tmp/build/80754af9/pydocstyle_1598885001695/work pyflakes==2.2.0 Pygments @ file:///tmp/build/80754af9/pygments_1604103097372/work pylint @ file:///tmp/build/80754af9/pylint_1598623985952/work pyodbc===4.0.0-unsupported pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1594392929924/work pyparsing==2.4.7 pypeflow==0.0.1 pyrsistent @ file:///tmp/build/80754af9/pyrsistent_1600141720057/work pysam==0.16.0.1 PySocks==1.7.1 pytest==0.0.0 python-dateutil==2.8.1 python-jsonrpc-server @ file:///tmp/build/80754af9/python-jsonrpc-server_1600278539111/work python-language-server @ file:///tmp/build/80754af9/python-language-server_1600454544709/work pytz==2020.1 PyWavelets @ file:///tmp/build/80754af9/pywavelets_1601658317819/work pyxdg @ file:///tmp/build/80754af9/pyxdg_1603822279816/work PyYAML==5.3.1 QDarkStyle==2.8.1 QtAwesome @ file:///tmp/build/80754af9/qtawesome_1602272867890/work qtconsole @ file:///tmp/build/80754af9/qtconsole_1600870028330/work QtPy==1.9.0 regex @ file:///tmp/build/80754af9/regex_1602786672676/work requests @ file:///tmp/build/80754af9/requests_1592841827918/work rope @ file:///tmp/build/80754af9/rope_1602264064449/work 
Rtree==0.9.4 ruamel_yaml==0.15.87 seaborn @ file:///tmp/build/80754af9/seaborn_1600553570093/work SecretStorage==3.1.2 Send2Trash==1.5.0 simplegeneric==0.8.1 singledispatch @ file:///tmp/build/80754af9/singledispatch_1602523705405/work sip==4.19.13 six @ file:///tmp/build/80754af9/six_1605205327372/work snowballstemmer==2.0.0 sortedcollections==1.2.1 sortedcontainers==2.2.2 soupsieve==2.0.1 Sphinx @ file:///tmp/build/80754af9/sphinx_1597428793432/work sphinxcontrib-applehelp==1.0.2 sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==1.0.3 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.4 sphinxcontrib-websupport @ file:///tmp/build/80754af9/sphinxcontrib-websupport_1597081412696/work spyder @ file:///tmp/build/80754af9/spyder_1599056981321/work spyder-kernels @ file:///tmp/build/80754af9/spyder-kernels_1599056754858/work SQLAlchemy @ file:///tmp/build/80754af9/sqlalchemy_1603397987316/work statsmodels @ file:///tmp/build/80754af9/statsmodels_1602280205159/work sympy @ file:///tmp/build/80754af9/sympy_1605119542615/work tables==3.6.1 tblib @ file:///tmp/build/80754af9/tblib_1597928476713/work terminado==0.9.1 testpath==0.4.4 threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl toml @ file:///tmp/build/80754af9/toml_1592853716807/work toolz @ file:///tmp/build/80754af9/toolz_1601054250827/work tornado==6.0.4 traitlets @ file:///tmp/build/80754af9/traitlets_1602787416690/work ujson @ file:///tmp/build/80754af9/ujson_1602523317881/work unicodecsv==0.14.1 urllib3 @ file:///tmp/build/80754af9/urllib3_1603305693037/work watchdog @ file:///tmp/build/80754af9/watchdog_1593447344699/work wcwidth @ file:///tmp/build/80754af9/wcwidth_1593447189090/work webencodings==0.5.1 Werkzeug==1.0.1 widgetsnbextension==3.5.1 wrapt==1.11.2 wurlitzer @ file:///tmp/build/80754af9/wurlitzer_1594753850195/work xlrd==1.2.0 XlsxWriter @ file:///tmp/build/80754af9/xlsxwriter_1602692860603/work xlwt==1.3.0 xmltodict @ file:///Users/ktietz/demo/mc3/conda-bld/xmltodict_1629301980723/work yapf @ file:///tmp/build/80754af9/yapf_1593528177422/work zict==2.0.0 zipp @ file:///tmp/build/80754af9/zipp_1604001098328/work zope.event==4.5.0 zope.interface @ file:///tmp/build/80754af9/zope.interface_1602002420968/work
The people at the cluster told me to install it like this:
conda create -c bioconda -c conda-forge --name=new-nim-falcon pb-assembly nim-falcon==3.0.2
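(For context, the bioconda setup guide linked below recommends registering the channels once per user, which is likely why the explicit -c flags were needed here; a sketch of that one-time setup:)
$ conda config --add channels defaults
$ conda config --add channels bioconda
$ conda config --add channels conda-forge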
Operating system
Which operating system and version are you using? I am using Windows 64-bit.
Package name
Which package / tool is causing the problem? Which version are you using (use tool --version)? Have you updated to the latest version (conda update package)? Have you updated the complete env by running conda update --all? Have you ensured that your channel priorities are set up according to the bioconda recommendations at https://bioconda.github.io/#set-up-channels?
I need feedback regarding FALCON for a PacBio assembly.
Conda environment
What is the result of conda list? (Try to paste that between triple backticks.)
Error message
After step 0 of the FALCON assembly, the run gets killed because it is out of memory. We are running it on the cluster on a node with 4 TB. The genome size should be around 2.3 Gb. I have heard of others assembling much larger genomes using FALCON and not needing nearly 4 TB. The data are PacBio CLR.