abhirajnair2002 opened 2 months ago
Can you attach the syslog file inside of the output project directory?
Hi! Please find the syslog file: syslog.zip
It is not very informative, but I suspect that:

a) You are running out of memory. How much available RAM do you have in your WSL VM?
b) There is some other issue coming from running SqueezeMeta inside WSL. I don't think this is the case, at least if you are using WSL2.
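A quick way to check the memory point above (these are standard Linux commands, nothing SqueezeMeta-specific) is to ask the WSL VM what it actually sees:

```shell
# Total RAM the VM sees, straight from the kernel
grep MemTotal /proc/meminfo

# More readable summary, if procps is installed
free -h || true
```

If `MemTotal` is far below your host's physical RAM, WSL2 is capping the VM, and the assembler can get killed by the OOM killer well before the host runs out of memory.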
Also, if you are running this inside WSL2 (I'm just assuming you are, since I see /mnt/d/ in some of the paths in the syslog), be aware of the discussion in #695. However, I see that your project directory is located inside your home (instead of inside one of the Windows partitions), so you should be safe.
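If the VM does turn out to be memory-capped, the limit can be raised from the Windows side via a `.wslconfig` file. A minimal sketch (the 48GB/16GB values are placeholders, pick what your host can spare; here we just generate the file contents):

```shell
# Generate example contents for %UserProfile%\.wslconfig (Windows side).
# Values are illustrative; adjust to your host.
cat > wslconfig.example <<'EOF'
[wsl2]
memory=48GB
swap=16GB
EOF
```

After copying this into `%UserProfile%\.wslconfig`, run `wsl --shutdown` from Windows so the new limits take effect on the next start.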
I just realized that your syslog mentions that you are running MEGAHIT, but the command failing in your screenshot was spades.py. Is this the right file?
Also, if using WSL2, make sure that the SqueezeMeta databases are stored inside the WSL partition (instead of the Windows partitions mounted in /mnt/c, /mnt/d, etc.). Otherwise some steps of the pipeline may be very slow.
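A quick sanity check for this (plain shell; the DB path below is a hypothetical example, substitute your own):

```shell
# Hypothetical database location; replace with your actual SqueezeMeta db dir
DB=/home/user/SqueezeMeta/db

# Windows drives are mounted under /mnt/<letter> in WSL2; anything under
# them goes through the slow 9P filesystem bridge
case "$DB" in
  /mnt/[a-z]/*) echo "DB is on a Windows partition: expect very slow I/O" ;;
  *)            echo "DB is inside the WSL filesystem: OK" ;;
esac
```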
Hi there,
I had the same issue.

`SqueezeMeta.pl -m coassembly -p anoxic_Aug -s ./sample.list -f /users/zw00847/zw00847/2-Trimmed -a spades -binners concoct,metabat2 -b 32 -t 20`

I tried memory = 64G, 128G, and 256G; all gave me the same error.

According to your reply, do you mean I need to attach the syslog file inside my output project directory (anoxic_Aug)?
Thanks for your time.
Hey! The error I was receiving happened to be due to the lack of RAM allotted to WSL. I happened to have access to a server with 256 GB of RAM, and that seemed to fix the problem for me.
The syslog file was just for their reference and didn’t have any use in the actual data analysis.
good luck!
@ZhufangWang yes, that would be the file. But it is likely that you're also running out of memory
@abhirajnair2002 @fpusan Hi both, thanks for getting back. In this case, do I need to go to 512 GB? I submitted several jobs to the cluster; some of the jobs are running well, while the others just failed with this issue.
Then you may need to increase the memory for those jobs in particular, yes
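Assuming the cluster uses SLURM (an assumption on my part; adapt the directives to your scheduler), raising the memory for the failing jobs would look something like this sketch:

```shell
#!/bin/bash
#SBATCH --job-name=anoxic_Aug   # hypothetical job name
#SBATCH --mem=512G              # memory request; raise this for the failing jobs
#SBATCH --cpus-per-task=20      # matches -t 20 in the SqueezeMeta call

# ... SqueezeMeta.pl invocation goes here ...
```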
Also, SPAdes requires much more memory. Try using MEGAHIT instead.
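Concretely, starting from the command posted above, that should just mean swapping the `-a` argument (a sketch, not verified against your setup):

```shell
# Same coassembly call as before, but with the MEGAHIT assembler,
# which needs far less RAM than SPAdes
SqueezeMeta.pl -m coassembly -p anoxic_Aug -s ./sample.list \
    -f /users/zw00847/zw00847/2-Trimmed \
    -a megahit -binners concoct,metabat2 -b 32 -t 20
```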
The error shown in the screenshot is displayed each time the pipeline is run. Please help.