Closed kushalsuryamohan closed 4 years ago
Hi Kushal,
It looks like your job is being killed for lack of memory. You haven't said how many threads and how much RAM you are requesting, but you should ask for around 500Gb on your 'himem' queue. The master job will chunk your query and target files, then run KMatch to determine exact matches to use as seeds. The KMatch jobs are spawned as separate SLURM jobs, each requesting 2 threads and 100Gb of RAM by default; you might need to increase this for large genomes using the -km_mem parameter. After KMatch, slaves are spawned as SLURM jobs to compare the chunks, and by default only 1 slave is spawned. You could try running 10 slaves, each requesting 8 threads and 500Gb of memory, i.e. -slaves 10 -threads 8 -sl_mem 500.
The parameter choice here depends on your hardware. Our HPC is configured in blocks, each with 8 cores and 360Gb of local memory, so we tend to set the parameters to run one slave per block (i.e. 8 threads and 360Gb). That is for optimum speed; other settings will work as long as you don't run out of memory. I hope this helps.
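Putting the suggested settings together, an invocation might look roughly like the sketch below. The input paths and output directory are placeholders, and the -q/-t/-o flag names should be double-checked against SatsumaSynteny2's own help output; the -slaves, -threads, -sl_mem and -km_mem values are the ones discussed above.

```shell
# Hypothetical SatsumaSynteny2 command using the parameters suggested above.
# query.fasta, target.fasta and satsuma_out are placeholders for your data.
SatsumaSynteny2 \
  -q query.fasta \
  -t target.fasta \
  -o satsuma_out \
  -slaves 10 \
  -threads 8 \
  -sl_mem 500 \
  -km_mem 100
```

With these settings each of the 10 slaves requests 8 threads and 500Gb, and each KMatch job requests 100Gb, so make sure your queue limits allow that total footprint.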
Best wishes, Jon
Hello, I am a new user trying to generate synteny between two 1.5 Gb genomes. I installed Satsuma2 and ran the test script locally, but now I am trying to launch the actual synteny analysis on a SLURM cluster.
Here is my satsuma_run.sh script where I uncommented the code to launch Satsuma on SLURM:
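(The script itself was attached as an image and is not reproduced here.) For readers without the attachment, the SLURM branch of a launcher like satsuma_run.sh typically just wraps the command it is given in an sbatch submission. The sketch below is an illustration only: the argument order and the submit_slurm_job helper are assumptions, not the shipped script's actual interface.

```shell
#!/bin/bash
# Hypothetical sketch of handing a Satsuma worker command to SLURM.
# submit_slurm_job and its argument order (command, threads, memory in
# Gb, job name) are illustrative assumptions, not Satsuma2's interface.
submit_slurm_job() {
  local cmd=$1 threads=$2 mem_gb=$3 name=$4
  # Build and print the sbatch line; pipe it to a shell to submit for real.
  echo sbatch --job-name="$name" \
       --cpus-per-task="$threads" \
       --mem="${mem_gb}G" \
       --wrap="$cmd"
}

# Example: a KMatch chunk with the default 2 threads / 100Gb.
submit_slurm_job "KMatch query.fasta target.fasta 32" 2 100 kmatch_chunk
```

The point is that every worker becomes its own SLURM job, which is why the per-job thread and memory requests (not just the master's allocation) have to fit the cluster's limits.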
And here is my SatsumaSynteny2 command -
I am not sure what the problem is, but here is the output I get after submitting the above command -
I would really appreciate it if I could get this resolved. Attached are the compute resource details of the cluster I have access to. Please advise if my parameters are incorrect. Many thanks!