This comes from Liang Li (ANL)
I compiled OCEAN in my home directory on NERSC and ran it on another filesystem designated for job execution. After the self-consistent charge density is obtained, the run always fails when calling cut3d, with an MPI error message.
I figured this is because NERSC doesn't allow execution from login nodes. Line 358 of AbinitDriver.pl reads:
"system("$ENV{'OCEAN_BIN'}/cut3d < cut3d.in > cut3d.log 2> cut3d.err") == 0"
which calls cut3d directly from $OCEAN_BIN. It looks like cut3d is not a parallel executable, so adding "$para_prefix" doesn't solve the problem. What I did was add "srun -n 1" right before "$ENV{'OCEAN_BIN'}" so that cut3d is run on a single core.
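For illustration, the modified call would look something like the following (a sketch of the change described above; it assumes the cluster uses SLURM's srun launcher, as NERSC does):
system("srun -n 1 $ENV{'OCEAN_BIN'}/cut3d < cut3d.in > cut3d.log 2> cut3d.err") == 0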
This glitch is specific to clusters that forbid job execution on login nodes.