headmyshoulder / odeint-v2

odeint - solving ordinary differential equations in c++ v2
http://headmyshoulder.github.com/odeint-v2/

Observer for parallel code example #157

Closed · pranavcode closed this issue 8 years ago

pranavcode commented 9 years ago

Hi,

The parallel code examples (MPI, OpenMP and CUDA) give the integration result only at the end of the integration and do not demonstrate how to observe intermediate states. Such code delivers little value if intermediate states cannot be observed or recorded.

The examples wrap the integration call in split() and unsplit() calls, and this seems to prevent observing the intermediate steps (at least that's what I can see; maybe I am just not figuring out a better way, please help!).

To put this into perspective: let's say I have a neuronal simulation that uses integrate_n_steps(), starting at time 0.0 and running 1000 steps with step size 0.05. I am more interested in the state vector (say x) at each step, to see the dynamics and state changes during the simulation, than in the integration result after the last step.

Is there a way intermediate states can be observed?

mariomulansky commented 9 years ago

You can pass an observer to integrate_n_steps; it will be called after every time step with the current state and time of the integration. The state is given as an mpi_state, so you should process it in the same way as in your rhs function. That means your observation code should also be executed in a distributed way using MPI, just like the rhs.
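A minimal sketch of how that can look, modeled on the mpi_state interface from odeint's MPI backend; the decay_system rhs and local_observer are hypothetical names, and the 1000 steps of size 0.05 are taken from your description:

```cpp
#include <iostream>
#include <vector>
#include <boost/mpi.hpp>
#include <boost/numeric/odeint.hpp>
#include <boost/numeric/odeint/external/mpi/mpi.hpp>

using namespace boost::numeric::odeint;

typedef std::vector< double > inner_type;
typedef mpi_state< inner_type > state_type;   // x() accesses the local chunk

// hypothetical rhs: uncoupled exponential decay, acting on the local chunk only
struct decay_system
{
    void operator()( const state_type &x , state_type &dxdt , double /*t*/ ) const
    {
        for( size_t i = 0 ; i < x().size() ; ++i )
            dxdt()[i] = -x()[i];
    }
};

// observer: called after every step with the distributed state and the time
struct local_observer
{
    void operator()( const state_type &x , double t ) const
    {
        if( !x().empty() )
            std::cout << "rank " << x.world.rank() << "  t=" << t
                      << "  x_local[0]=" << x()[0] << "\n";
    }
};

int main( int argc , char **argv )
{
    boost::mpi::environment env( argc , argv );
    boost::mpi::communicator world;

    inner_type x_global( 64 , 1.0 );   // full state, relevant on rank 0 only

    state_type X( world );
    split( x_global , X );             // scatter chunks to the processes

    integrate_n_steps( runge_kutta4< state_type >() , decay_system() ,
                       X , 0.0 , 0.05 , 1000 , local_observer() );

    unsplit( X , x_global );           // gather the final state back on rank 0
}
```

Here each rank only prints the chunk it owns, so the observation adds no communication during the integration.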

pranavcode commented 9 years ago

Just to clarify: the split function splits the larger state vector into smaller chunks depending on the number of processes spawned during execution, is that right? If so, every process has its own chunk and has to communicate it to the master process for it to be observed? If my understanding is right, I will go ahead and write that parallel observer. Let me know if I am missing anything.

mariomulansky commented 9 years ago

Yes, that's right. Your data is distributed across the processes. In general you also want the observer to be as local as possible; too much communication will kill your performance.
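For example, here is a sketch of an observer that keeps the communication bounded by gathering the chunks on the master only every k-th step (gathering_observer is a hypothetical name; state_type and inner_type are the typedefs from the sketch above):

```cpp
#include <iostream>
#include <vector>
#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>  // lets boost::mpi gather std::vector

// hypothetical observer: collects all chunks on rank 0 every k-th step
struct gathering_observer
{
    size_t every , count;
    explicit gathering_observer( size_t k ) : every( k ) , count( 0 ) {}

    void operator()( const state_type &x , double t )
    {
        if( count++ % every != 0 ) return;   // skip most steps to limit traffic

        // every rank contributes its chunk; only rank 0 receives the pieces
        std::vector< inner_type > chunks;
        boost::mpi::gather( x.world , x() , chunks , 0 );

        if( x.world.rank() == 0 && !chunks.empty() && !chunks[0].empty() )
        {
            // chunks[r] holds the piece owned by rank r; record it as needed
            std::cout << "t=" << t << "  x[0]=" << chunks[0][0] << "\n";
        }
    }
};
```

Passing gathering_observer(10) to integrate_n_steps would then assemble the full state on rank 0 once every ten steps, while everything else stays local.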

Btw: if you end up with a somewhat comprehensive MPI code, we would be happy to use it as an example to make it easier for others to use the MPI backend.

pranavcode commented 9 years ago

Thanks @mariomulansky, you have been a great help. I will surely share the parallel codes I eventually write (MPI, OpenMP and CUDA) for review, and you may include them as examples.

mariomulansky commented 8 years ago

I consider this closed then. If you have example codes, please don't hesitate to share them.