Closed: shahpnmlab closed this issue 4 years ago
What happens if you reduce the number of MPI per node to exclude the possibility of running out of memory?
P.S. We no longer support RELION 3.0.x. Please update to 3.1.0.
Hi Takanori, I have tried running it with fewer MPI processes (9, 7 and 5) and different thread counts (1, 2 and 4). The problem persists no matter which version of RELION I use, including the latest stable release (3.1.0-commit-1e738e).
Setting --maxsig 2000 didn't help either...
Using --maxsig to solve memory problems is valid only for GPUs.
As for "fewer MPIs like 9, 7, 5": please try 1 MPI per node.
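For what it's worth, here is a minimal sketch of what "1 MPI per node" can look like on a CPU cluster (assuming OpenMPI and a 5-node allocation; the job directories, rank and thread counts are placeholders, and all other options should stay as in your note.txt):

```bash
# Sketch only: one MPI rank per node, with the remaining cores used as threads
# via --j. Fewer ranks per node means fewer in-memory copies of the data, which
# lowers the per-node memory footprint. Paths and counts below are placeholders.
mpirun -np 5 --map-by ppr:1:node \
    relion_refine_mpi \
      --i Extract/job0YY/particles.star \
      --o Refine3D/job0XX/run \
      --j 16 \
      ...   # keep the rest of the options from your note.txt unchanged
```

Keep in mind that 3D auto-refine needs at least 3 MPI ranks (one master plus an even number of workers split over the two half-sets), so with 1 rank per node you still need at least 3 nodes.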
Anyway, I cannot help because I am not involved with subtomo functionality at all. @joton might be able to help.
No worries! Thanks for looping @joton in...
Thanks @biochem-fan, we're now in contact by email as well.
I don't remember having such a problem during angular error estimation, either when I was using 3.0 or now in 3.1.0. Anyway, let's first try to repeat the process using RELION 3.1.0. If the problem persists, it has to be somehow related to @shahpnmlab's pipeline.
Describe your problem
I am trying to run 3D classification and refinement on sub-volumes using RELION 3.0.7, compiled for use on our CPU-based cluster. I have extracted the volumes in 64 px, 128 px and 256 px boxes. I can run 3D classification on the 64 px sub-volumes, but not on the 128 px or 256 px boxes, and 3D auto-refine fails for all three box sizes. I have modified the particles.star file for the 64 px boxes so that the 4 stray particles end up in a single group (refer to the error message and the suggested fix below), but despite my best efforts I am unable to run these jobs successfully.
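For context, this is roughly the kind of STAR-file edit I mean, as a sketch only: a blunter version that puts every particle (not just the 4 stray ones) into one group. It assumes a RELION 3.0-style particles.star whose groups are defined by _rlnGroupNumber; if groups are defined by _rlnGroupName instead, the same idea applies to that column.

```bash
# Sketch only, not the exact edit I made: rewrite the _rlnGroupNumber column so
# that every particle belongs to group 1. The column index is read from the
# loop_ header, so the script does not depend on the column order.
cp particles.star particles_orig.star   # keep a backup of the original file
awk '
  # header line such as "_rlnGroupNumber #12": remember the column index
  $1 == "_rlnGroupNumber" { split($2, a, "#"); col = a[2] + 0 }
  # data rows (many fields, not label/loop_/data_ lines): set the group to 1
  col && NF > 2 && $1 !~ /^_/ && $1 != "loop_" && $1 !~ /^data_/ { $col = 1 }
  { print }
' particles_orig.star > particles.star
```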
Environment:
Dataset:
Job options (note.txt in the job directory): 3D classification with the 128 px/256 px sub-volumes
Error message:
3D auto-refine with the 64 px box
(I have tried with 128 cores and 4 threads and it is the same error.)