baudren / montepython_public

Public repository for the Monte Python Code
MIT License

Warning while running Montepython in parallel #21

Closed: akhileshnautiyal closed this issue 9 years ago

akhileshnautiyal commented 10 years ago

Hi, I tried to run montepython in parallel using mpi4py on my cluster. The program was running fine, but I got the following warning:

" An MPI process has executed an operation involving a call to the "fork()" system call to create a child process. Open MPI is currently operating in a condition that could result in memory corruption or other system errors; your MPI job may hang, crash, or produce silent data corruption. The use of fork() (or system() or other calls that create child processes) is strongly discouraged.

The process that invoked fork was:

Local host: satpura (PID 37516) MPI_COMM_WORLD rank: 0

If you are absolutely sure that your application will successfully and correctly survive a call to fork(), you may disable this warning by setting the mpi_warn_on_fork MCA parameter to 0."

Do I need to worry about this warning?

Thanks, Akhilesh
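
For context, Open MPI prints this warning whenever a child process is created inside an MPI rank, for example through Python's subprocess module or os.system(). A minimal sketch that can trigger the same message (assuming mpi4py and an Open MPI build with mpi_warn_on_fork enabled; the script name and the hostname call are only illustrative):

    # fork_warning_demo.py (hypothetical name): reproduce the Open MPI fork() warning
    import subprocess
    from mpi4py import MPI

    rank = MPI.COMM_WORLD.Get_rank()

    # Creating any child process inside an MPI rank (subprocess, os.system, ...)
    # is what Open MPI warns about; whether problems actually occur depends on
    # the interconnect and the MPI build.
    host = subprocess.check_output(["hostname"]).decode().strip()
    print("rank %d spawned a child process on %s" % (rank, host))

Launched with something like mpirun -np 2 python fork_warning_demo.py, this typically produces the same warning on Open MPI while the job itself still completes.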

baudren commented 10 years ago

@akhileshnautiyal This is the first time I have seen this error, and I have not experimented much with MPI. From browsing the web, it might be due to several things. Could you first try to run the code with this:

mpiexec -mca mpi_warn_on_fork 0 "normal monte python call"

if you use mpiexec, or the equivalent with mpirun. This should suppress the warning if it comes from Monte Python itself. If the warning is still there, it might instead be due to the MPI library that you are using. Could you try another MPI library, like MPICH2?

(information coming from there)
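
As an aside, Open MPI also reads MCA parameters from environment variables of the form OMPI_MCA_<parameter>, so exporting OMPI_MCA_mpi_warn_on_fork=0 before the usual mpirun call should have the same effect as the -mca flag above. To check which MPI implementation mpi4py is actually linked against before switching to MPICH2, a small sketch (assuming only that mpi4py is installed):

    # check_mpi.py (hypothetical name): report the MPI library behind mpi4py
    from mpi4py import MPI

    # Get_library_version() returns the vendor string, e.g. "Open MPI ..." or
    # "MPICH Version: ...", which shows whether the warning comes from an
    # Open MPI build in the first place. It needs an MPI-3 library and may
    # raise NotImplementedError on older MPI-2 builds.
    print(MPI.Get_library_version())

    # Get_version() returns the (version, subversion) of the MPI standard in use.
    print("MPI standard: %d.%d" % MPI.Get_version())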

baudren commented 10 years ago

@akhileshnautiyal

Any update on the situation? Can I close this issue?

akhileshnautiyal commented 10 years ago

Hi, actually I am out of station, so I couldn't find the time to do it. You can close this issue for now. I will look at it again after 8th September. Thanks

baudren commented 9 years ago

Any update, @akhileshnautiyal ?

baudren commented 9 years ago

As the MPI implementation has now been tested properly on a cluster, I would say that this issue can be closed. If you encounter more problems, please open a new one.