mjr-deltares / parallel-modflow-dsd23

Parallel MODFLOW 6 class at the Deltares Software Days 2023
Creative Commons Zero v1.0 Universal

Compile error on Ubuntu 22.04 #6

Closed · dbrakenhoff closed this 10 months ago

dbrakenhoff commented 10 months ago

Hi @mjr-deltares,

I'm having some issues compiling MF6 on my Ubuntu 22.04 machine using the pixi workflow (I also tried building it step by step in a mamba environment, following the instructions from the other parallel-modflow-class repo).

I did manage to build the binaries on WSL using the pixi workflow, and I've compared the output logs, but I can't see any meaningful differences between the two that would explain why it isn't succeeding on my machine.

This is the meson setup log for my machine (which looks fine when compared to a build on WSL): meson_setup.log

And the meson install log (compiler error when linking mf6): meson_install.log

Anyway, if you have some time, maybe you have an inkling of what might be going wrong? That would be greatly appreciated. Other than that: really cool developments, and nice material to introduce it all!

Cheers,

Davíd

mjr-deltares commented 10 months ago

Hi @dbrakenhoff , thanks for reporting this and for your encouraging remarks!

A quick look at the log shows that the build is picking up dependencies from directories outside the project:

/home/david/github/petsc/arch-linux-c-debug

and I don't think it should. Could you maybe clean out your LD_LIBRARY_PATH and PKG_CONFIG_PATH variables and try again? It is important that all libraries really come from the .pixi/env path.
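For example, here is a rough sketch in Python (just for checking; the ".pixi/env" substring is an assumption about where pixi puts the environment) that flags entries in those variables pointing outside the pixi environment:

import os

# Flag entries in LD_LIBRARY_PATH / PKG_CONFIG_PATH that point outside
# the project's .pixi/env directory (adjust the substring if needed).
for var in ("LD_LIBRARY_PATH", "PKG_CONFIG_PATH"):
    entries = os.environ.get(var, "").split(os.pathsep)
    stray = [p for p in entries if p and ".pixi/env" not in p]
    if stray:
        print(f"{var} has entries outside .pixi/env:")
        for p in stray:
            print("  " + p)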

dbrakenhoff commented 10 months ago

Thanks for the quick reply @mjr-deltares!

That partially solved it. My bad, I accidentally started with the instructions from the other parallel-modflow-class repo. But now I'm getting a "no MPI" error on the final model run in test_installation.py. The final part of the stdout:

...
FloPy is using the following executable to run the model: ../../../../../.local/share/flopy/bin/mf6
FloPy is using /home/david/github/parallel-modflow-dsd23/.pixi/env/bin/orterun to run /home/david/.local/share/flopy/bin/mf6 on 2 processors.

ERROR REPORT:

  1. Can not run parallel mode with this executable: no MPI
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------

ERROR REPORT:

  1. Can not run parallel mode with this executable: no MPI
Traceback (most recent call last):
  File "/home/david/github/parallel-modflow-dsd23/installation/test_installation.py", line 27, in <module>
    assert success, "base parallel model did not run"
AssertionError: base parallel model did not run

It doesn't seem like mf6 is missing libraries? This is the result of ldd mf6: ldd.log

Full ./pixi run install log: pixi.log

mjr-deltares commented 10 months ago

Hi @dbrakenhoff, I think you are using a different executable from the one that was built against MPI. I believe you should see something like:

FloPy is using the following executable to run the model: ../../../.pixi/env/bin/mf6

Can you check why yours is

FloPy is using the following executable to run the model: ../../../../../.local/share/flopy/bin/mf6
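As a quick sanity check, here is a sketch (the workspace path and the absolute executable path are assumptions) that prints which mf6 is first on your PATH and loads the test model with the pixi-built executable passed explicitly:

import shutil
import flopy

# Which mf6 is found first on the current PATH?
print(shutil.which("mf6"))

# Load the model with an explicit executable instead of relying on whatever
# "mf6" happens to resolve to; sim_ws is a placeholder for the model workspace.
sim = flopy.mf6.MFSimulation.load(
    sim_ws="model_ws",
    exe_name="/home/david/github/parallel-modflow-dsd23/.pixi/env/bin/mf6",
)

# Run on 2 processors, as the test script does.
success, buff = sim.run_simulation(processors=2)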

mjr-deltares commented 10 months ago

Have you maybe installed through pixi while also having another Python environment (one that contains a flopy installation) activated?
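A quick way to check (just a sketch) is to print which Python interpreter and which flopy installation are actually being used:

import sys
import flopy

# If these point outside .pixi/env, another environment is still active.
print(sys.executable)
print(flopy.__file__)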

dbrakenhoff commented 10 months ago

Argh, my bad, I should've caught that. Thanks a lot for spotting my mistake in any case. It works now!

Something weird seems to be going on with flopy, though. When I'm in the pixi environment, the correct mf6 is picked up, but after I import flopy it somehow reverts back to my base mf6. Also, the new simulation object does not persist the exe_name from the first simulation before it was split, which also confused me for a bit:

(screenshot: exe_name of the original vs. the split simulation)
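For now I'm working around it by setting the executable again by hand; a minimal sketch, assuming the splitting is done with flopy's Mf6Splitter and using placeholder names for the workspace and splitting array:

import numpy as np
import flopy
from flopy.mf6.utils import Mf6Splitter

# Reload the original simulation (the workspace path is a placeholder).
sim = flopy.mf6.MFSimulation.load(sim_ws="model_ws")
gwf = sim.get_model()

# Build a simple two-domain splitting array (assumes a structured DIS grid).
split_array = np.zeros((gwf.modelgrid.nrow, gwf.modelgrid.ncol), dtype=int)
split_array[:, gwf.modelgrid.ncol // 2:] = 1

# Split the model and carry the original exe_name over explicitly,
# since the split simulation falls back to the default "mf6".
splitter = Mf6Splitter(sim)
new_sim = splitter.split_model(split_array)
new_sim.exe_name = sim.exe_name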

But these are flopy issues, which I'll raise there once I figure this out a bit more. So this issue is resolved! Thanks for your help!