transient-haskell / transient

A full-stack, reactive architecture for general-purpose programming. Algebraic and monadically composable primitives for concurrency, parallelism, event handling, transactions, multithreading, Web, and distributed computing with complete de-inversion of control (no callbacks, no blocking, pure state).
MIT License

"mclustered" version on Hackage #7

Closed AlexeyRaga closed 8 years ago

AlexeyRaga commented 8 years ago

Could you please update Hackage to the version where clustered/mclustered issue is fixed?

agocorona commented 8 years ago

It was sent when I closed the issue.

Concerning the spurious messages in "slave" nodes, I think I found it, but it will take some time to upload the update.


agocorona commented 8 years ago

Ah, OK, you mean Hackage, not Git. I will do it right now.


agocorona commented 8 years ago

Solved: the spurious messages in remote nodes are fixed too. I uploaded the update to Git and Hackage.

Can you confirm it?

AlexeyRaga commented 8 years ago

It looks like it works better now.

I see that now only the node where I call "start" prints the results (before, all nodes were doing it). That seems logical to me too.

Thanks for the changes.

agocorona commented 8 years ago

Ok.

The problem was due to a switch variable that indicates that the computation is acting as a "slave":

  setSData WasRemote

That status variable was not shared by all the threads, so each thread executed the whole computation instead of only the remote part. For some reason this was not observable on a single node, nor in a two-node cluster.
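The failure mode can be illustrated with a self-contained sketch in plain Haskell (using an `MVar`, not the actual transient session-state internals; `Role` and `runWorkers` are hypothetical names for illustration): when the "slave" flag is stored where every forked thread can see it, all workers consistently skip the local part of the computation.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM)

-- Whether this node drives the computation or only serves remote requests.
data Role = Master | Slave deriving (Eq, Show)

-- Fork n workers; each consults the shared role flag and runs
-- only the remote part when the node is acting as a slave.
runWorkers :: Int -> Role -> IO [String]
runWorkers n r = do
  role  <- newMVar r               -- one flag shared by every thread
  boxes <- forM [1 .. n] $ \i -> do
    box <- newEmptyMVar
    _   <- forkIO $ do
      shared <- readMVar role      -- every thread sees the same status
      putMVar box $ case shared of
        Slave  -> "thread " ++ show i ++ ": remote part only"
        Master -> "thread " ++ show i ++ ": full computation"
    return box
  mapM takeMVar boxes              -- collect one result per worker

main :: IO ()
main = runWorkers 3 Slave >>= mapM_ putStrLn
```

The bug described above corresponds to each thread holding its own copy of the flag, so a worker that never saw `Slave` would run the full computation and print spurious output on the remote node.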

Thanks for reporting this.