Closed pranavcode closed 9 years ago
Thanks for reporting this problem. This code is supposed to run without any modifications. Unfortunately, the parallel implementations are not tested as thoroughly as the rest of the library. I will try to reproduce the problem and see if I can find a solution.
I tried on my machine with gcc 4.8.2, intel 15.0.0 and clang 3.4 and I could not reproduce your problem.
My only suggestion now is to try to build the example using bjam; maybe this gives the correct results...
@mariomulansky, building the example with bjam helped. Thanks.
Yet I am not sure what is missing from the command-line compilation with mpic++ above. My build configuration is similar to the Jamfile, and still execution fails for more than one process.
Also, I tried on two other boxes, with clang 3.5 (on OS X 10.10.2, Ubuntu 14.04) and gcc 4.8.2 (on Ubuntu 14.04). The Boost installations were done using bjam. I used mpic++. The example ran successfully with clang, but failed with g++. And bjam worked on both boxes.
Is there a way I can build the example with make, as it is already part of our project and I don't want to switch to bjam? I will try to spend some more time to come up with justifiable conclusions. Meanwhile, if one can provide a hint in this regard, it will be appreciated. Thanks!
I have updated the gist for Xeon E5-2680 box (gcc 4.8.2 and clang 3.5, Ubuntu 14.04).
If you compile with bjam -d2 you can see the exact compilation commands used by bjam. Maybe this helps to create an equivalent makefile?
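As a concrete way to compare the two builds (a sketch of the commands, not a verified transcript: -d2 is a real bjam option, and the wrapper-introspection flags below assume an Open MPI or MPICH installation of mpic++):

```shell
# Dump the exact compile/link command lines bjam runs for each target:
bjam -d2

# mpic++ is only a wrapper around the underlying compiler; ask it what
# it actually invokes, then diff that against your plain invocation.
mpic++ -showme   # Open MPI wrappers
mpic++ -show     # MPICH wrappers
```

Any flag present in the bjam output or the wrapper expansion but missing from the hand-written compile line is a candidate cause for the runtime difference.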
@mariomulansky, I will try doing that. Thanks.
Any thoughts on more than one process execution issue using mpic++? What was the execution environment you were not able to reproduce my error on?
I'm not sure what exactly you mean with "more than one process execution issue" now? Does everything work now if you compile with bjam, or are you still having problems even there?
My test system is a Kubuntu 14.04 with an Intel Core-i5 3210M
@mariomulansky, thanks for your patience and keeping this issue alive.
Compilation with bjam is as smooth as butter. bjam -d2 does a wonderful job of showing verbose output of everything it does in the background. I am trying to replicate it with make, and it will take a little more time than I presumed. Until then it would be great if we can keep this issue alive.
My bad on the "more than one process execution issue" wording. By that I meant: mpic++ compiles the code, but execution fails only when more than one process is involved. Here is what I think, and I might be wrong: the build script that compiles the example code passes some compiler configuration flags that are missing from the plain mpic++ compilation, and so a binary built with plain mpic++ fails when run with more than one process. I am not sure how to validate this (yet). Any thoughts?
/cc @headmyshoulder @neapel Apart from the issue mentioned, the MPI example (including OpenMP and CUDA examples) gives out the result at the end of integration through all the steps and does not demonstrate observation of intermediate states. How would one go about implementing observer efficiently to report all the intermediate states?
It might be that the problem arises because some MPI libraries that are linked were compiled with specific flags, while your code is not if you simply use mpicc. With bjam, however, it is ensured that all binaries are compiled with compatible flags and no problem occurs.
@mariomulansky, Thanks. One has to take these compiler flags into consideration.
Hi,
The MPI example (also the OpenMP and CUDA examples) gives out the result only at the end of integration through all the steps and does not demonstrate observation of intermediate states. The code delivers no value if intermediate states are not observed or recorded.
How would one go about implementing observer efficiently to report the intermediate states?
Closing this issue as resolved. I will open a new issue with the observer-specific query so that it can be addressed. Thanks.
Not sure if this is the correct place to report this (apologies, and please divert me to the correct location!).
I am trying to execute the example for MPI (Phase Chain) given here: odeint-v2/examples/mpi/phase_chain.cpp. For 2 or more processes, it gives a Segmentation Fault. As far as I understand, this can occur in case of undesired memory access, and I investigated the code, but the problem is not apparent.
The code I am trying is as given here odeint-v2/examples/mpi/phase_chain.cpp, and here is the failing execution (mpirun) with 2 or more processes - the error log.
My setup:
Ubuntu 14.04 and Boost version 1.56.0
With all the Boost libraries installed under /usr/lib/x86_64-linux-gnu/libboost_*, the compilation is as follows:
And a successful execution for 1 process:
But, for 2 or more processes it throws a Segmentation Fault.
Am I doing something wrong here? Is this program failing for anyone else, or is it just me? Am I expected to make changes to the program for my environment before compiling/running it?
Really appreciate the help.