Data transfer between machines of different endianness appears to be broken as far back as the 1.6 release. Here is the error report from the user:
The problem occurs in openmpi-1.6.x, openmpi-1.7, and openmpi-1.9. I implemented a small program that only scatters the columns of an integer matrix, so it is easier to see what goes wrong. I configured for a heterogeneous environment. Adding "-hetero-nodes" and/or "-hetero-apps" on the command line doesn't change much, as you can see at the end of this email. Everything works fine if I use only little-endian or only big-endian machines. Is it possible to fix the problem? Do you know in which file(s) I would have to look to find it, or debug switches that would provide more information?
I used the following command to configure the package on my "Solaris 10 Sparc" system (the commands for my other systems are similar).
Additional detail is available in the user's posting:
[http://www.open-mpi.org/community/lists/users/2012/12/20948.php]