WRF-CMake / wrf

🌀 The Weather Research and Forecasting (WRF) model with CMake support

macOS questions and suggestions related to JOSS #22

Closed zbeekman closed 5 years ago

zbeekman commented 5 years ago

It would be awesome to see WRF make it into the Homebrew package manager in the future. This would lower the barrier to entry for researchers and tinkerers, but there are some potential issues that might prevent that. The purpose of this issue is to discuss those potential issues and how to further improve ease of use on macOS.

  1. In general, Homebrew doesn't like things built with brewed GCC as the C and C++ compiler; whenever possible, formulae should use the native Apple Clang tooling. The CMake build system throws a fatal error when the user attempts to build with the default Apple Clang compiler. Is there a good reason for this? Are there known bugs, limitations, or issues when building with Apple Clang?
  2. Homebrew, like many package managers, picks a default MPI implementation. The implementation chosen was OpenMPI, mimicking Linux package managers that had to pick one. As such, asking users to install MPICH can cause confusion and errors if they have other scientific/numerical software installed on their systems. Changing the macOS install instructions to recommend OpenMPI rather than MPICH would result in a smoother user experience and better compatibility with software already on the target system. I have tested with both MPICH and OpenMPI and, AFAICT, encountered no issues using OpenMPI. Is there a good reason for telling users to use MPICH? Could the instructions be updated to recommend installing OpenMPI on macOS?
  3. Consider adding a Brewfile to the repository, or instructions on what to put in it. From the JOSS guidelines:

    Good: A package management file such as a Gemfile or package.json or equivalent

A Brewfile would be the Homebrew equivalent of this, allowing users to install all specified dependencies and execute commands in an isolated environment.

This issue will likely conclude my comments about installation on macOS.

Here are some example Brewfiles that you may wish to use.

MPICH (less preferred)

# macOS Homebrew Brewfile to install dependencies for WRF
# Usage: `brew bundle install --file=./Brewfile`
tap "homebrew/core"
# Distributed revision control system
brew "git"
# Cross-platform make
brew "cmake"
# GNU compiler collection
brew "gcc@8"
# Libraries and data formats for array-oriented scientific data
brew "netcdf"
# Library for manipulating JPEG-2000 images
brew "jasper"
# Implementation of the MPI Message Passing Interface standard
brew "mpich"

OpenMPI

# macOS Homebrew Brewfile to install dependencies for WRF
# Usage: `brew bundle install --file=./Brewfile`
tap "homebrew/core"
# Distributed revision control system
brew "git"
# Cross-platform make
brew "cmake"
# GNU compiler collection
brew "gcc@8"
# Libraries and data formats for array-oriented scientific data
brew "netcdf"
# Library for manipulating JPEG-2000 images
brew "jasper"
# High performance message passing library
brew "open-mpi"
letmaik commented 5 years ago

Apple Clang: fixed in #25

OpenMPI: There's no particular reason we recommend MPICH. For simplicity it would be good to recommend a single implementation for Linux and macOS. It seems Ubuntu doesn't favour one over the other, judging from the packages for mpich vs openmpi. Is there some statement where it says that Homebrew recommends Open MPI? In general, I'm happy to switch to Open MPI and change the CI accordingly.

Brewfile: Good idea, though if you do it for macOS you should do it for other systems as well, and there it gets more chaotic. In the end using a Brewfile would replace one line in the dependency installation docs with another. Are you sure this is beneficial?

Publishing to Homebrew: It makes sense for the future, but likely only after WRF-CMake is merged upstream, because ideally this should be driven by the UCAR folks and not rely on our fork. I think our pre-built binaries are already lowering the barrier considerably for users. Putting it in Homebrew would be the cherry on the top.

dmey commented 5 years ago

With regards to the choice of MPI implementation, I think we originally went with MPICH for two main reasons:

  1. MPICH is meant to be a high-quality reference implementation of the latest MPI standard and the basis for derivative implementations to meet special-purpose needs, whereas Open MPI targets the common case, both in terms of usage and network conduits.

  2. On Cray and IBM supercomputers, MPI comes pre-installed and is based on MPICH in both cases.

The points above and more info can be found here.

zbeekman commented 5 years ago

In the end using a Brewfile would replace one line in the dependency installation docs with another. Are you sure this is beneficial?

no 😉

Publishing to Homebrew: It makes sense for the future, but likely only after WRF-CMake is merged upstream, because ideally this should be driven by the UCAR folks and not rely on our fork. I think our pre-built binaries are already lowering the barrier considerably for users. Putting it in Homebrew would be the cherry on the top.

I completely agree, that is a task/topic for the future, not for now.

Is there some statement where it says that Homebrew recommends Open MPI?

Compare the output of brew uses --include-build open-mpi vs brew uses --include-build mpich. I've put in a PR to make the documentation reflect the practical reality. I personally prefer MPICH, but for consistency's sake homebrew-core won't accept formulae with mpich dependencies.
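As a rough sketch of that comparison (assuming Homebrew is installed; the exact counts will of course drift over time):

```shell
# Count how many homebrew-core formulae depend on each MPI implementation,
# including build-time dependencies. A noticeably larger count for open-mpi
# suggests it is the de facto default in homebrew-core.
brew uses --include-build open-mpi | wc -l
brew uses --include-build mpich | wc -l
```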

zbeekman commented 5 years ago

@dmey

With regards to the choice of MPI implementation, I think that the main reason we originally chose to go with MPICH was mainly because of two reasons:

I completely agree with your reasoning. I personally prefer MPICH, as well. However, I think that if you are asking macOS users to install dependencies with Homebrew you should ask them to use OpenMPI. If they have a package like nwchem installed that uses MPI, they will have OpenMPI installed, and trying to install mpich may result in a message about how it conflicts with OpenMPI or how links could not be created in the usual places /usr/local/{bin,lib,share,etc}.

FWIW, I don't necessarily see an issue with recommending one implementation on one OS and another on another. On Linux you should keep MPICH, IMO, but users may stub their toes if you tell them to install with MPICH on macOS. In theory, standard-conforming MPI implementations should be interchangeable.
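For anyone hitting the conflict described above, a sketch of the workaround (assuming Homebrew's usual keg layout under /usr/local):

```shell
# mpich and open-mpi install the same binaries (mpicc, mpiexec, ...), so
# only one can be linked into /usr/local at a time. Switching is a matter
# of unlinking one keg and linking (or installing) the other:
brew unlink open-mpi
brew install mpich        # or `brew link mpich` if it is already installed
# ...and back again:
brew unlink mpich
brew link open-mpi
```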

dmey commented 5 years ago

@zbeekman this makes sense and we can definitely change the line in the readme, but we will need to handle the binaries differently as these were compiled with MPICH. @letmaik before I do this, what do you think? For the new binaries we can ask users to install open-mpi, but for the old ones this may lead to some confusion... But perhaps we can make a note under the releases!?

letmaik commented 5 years ago

Yes, I think a note in the releases for the binaries should be fine. The user base is still small, so I don't expect a wave of complaints.

zbeekman commented 5 years ago

Do your distributed binaries package dependencies too? Or is there linkage to, e.g., /usr/local/lib/libmpi.dylib or /usr/local/lib/libmpi_mpifh.dylib?

If you can tell CMake to get mpi from /usr/local/opt/<mpich|open-mpi> that will potentially provide some robustness in the event of user brew linking/brew unlinking of MPI formulae, and if new versions are released. I don't recall if FindMPI.cmake will aggressively follow symbolic links or not, however.

I just checked the output of otool -L in my from-source build and it appears to be linking against MPICH from /usr/local/opt/mpich/... so that bodes well.
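For reference, the check I ran looks roughly like this (the executable path is hypothetical and depends on your build directory):

```shell
# List the dynamic libraries the WRF executable links against and filter
# for MPI. Paths under /usr/local/opt/<formula>/ are keg-specific and
# survive `brew unlink`, unlike the /usr/local/lib symlinks.
otool -L build/main/wrf.exe | grep -i mpi
```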

This comment has no real connection to the JOSS review, I'm just trying to wrap my head around your binary distributions. I guess I should just download one and test drive it, so far I've been testing from source installations.

letmaik commented 5 years ago

@zbeekman The distributed binaries are fully self-contained and contain dependencies as well (I guess the only real exception is the mpiexec tool, so we assume some compatibility). I would prefer to just rely on whatever FindMPI.cmake locates. So far we never had issues.

letmaik commented 5 years ago

@zbeekman I gave it a try and created my first formula! I tested it on Linuxbrew only at the moment, but it should work on macOS as well:

brew install https://raw.githubusercontent.com/WRF-CMake/WRF/letmaik/brew-formula/wrf.rb

It has a --with-debug option if you want to build the debug variant. I know that homebrew-core doesn't accept formulas with options anymore, so this may have to go if we want to benefit from automatic bottle builds via homebrew-core. For now, I think it's fine to keep the formula in our space, so no need to worry about it. Note that this only installs into the cellar, which I think is more appropriate because of the special folder structure of WRF. If it's all fine then we can put it into a separate repo with homebrew naming conventions so that the install command can be abbreviated to brew tap wrf-cmake/wrf && brew install wrf where the repo hosting the formula would be homebrew-wrf. I'm assuming this naming convention works for Linuxbrew as well, but I couldn't find any information on it. After all, there is linuxbrew-core...

zbeekman commented 5 years ago

Looks pretty good!

I have a few comments that I've made on the commits:

  1. https://github.com/WRF-CMake/WRF/commit/b80c50e520e410700c3fe94730ca3b415cc67166#r33877625 linuxbrew-core and homebrew-core both try to use open-mpi
  2. https://github.com/WRF-CMake/WRF/commit/065674aeeb84eb4ebd9ae8f43c283a183c4b4c3c#r33877593 parallel builds can be controlled by the user. It's probably best not to set this explicitly. What happens when parallel builds are used and memory runs out? Make will build as many jobs as possible if given -j.
  3. Naming the tap repository usually goes <user-org>/homebrew-<name> to allow the tap to be accessed by brew tap <user-org>/<name>.
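On the parallel-build point in 2., a portable way to let the job count follow the machine rather than hard-coding it (sysctl is the macOS spelling, nproc the Linux one):

```shell
# Pick one make job per CPU core; a bare `make -j` spawns unlimited jobs
# and can exhaust memory on large builds like WRF.
JOBS=$(sysctl -n hw.ncpu 2>/dev/null || nproc)
echo "building with $JOBS jobs"
# make -j"$JOBS"   # actual build invocation, shown commented here
```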

As far as the installation layout goes, it would be fairly easy and straightforward to add a flag in the future to install into a more canonical directory layout.

letmaik commented 5 years ago

@zbeekman Thanks for reviewing this. I switched to open-mpi; makes sense. I also removed the forced non-parallel build. It's nasty when it happens: you get an internal compiler error. I'll move the formula over to the homebrew-wrf repo.

letmaik commented 5 years ago

@zbeekman With https://github.com/WRF-CMake/WRF/commit/9b888a6e65994aac47e9e575348b5355fc81b345 do you think this issue is resolved? Do you think we still need a Brewfile?

zbeekman commented 5 years ago

Yes, I think this issue can be closed. Thanks!

zbeekman commented 5 years ago

Also, as far as the JOSS review goes, when I say "consider" that is just a suggestion. Right now the only things that I feel I have not investigated in enough detail yet are:

  1. Building on Linux
  2. Testing out a pre-compiled binary (experimental I know) but I want to at least try it
  3. Running some minimal example to ensure that the software that is built can actually function.

Also, if the other reviewer showed up, that would certainly light a fire under me to ensure I'm not the bottleneck, but I'll wrap things up soon. I think point 3 above is all that's left where I need some input/guidance/changes from you guys. (I'll have to double-check the review checklist to make sure.)

EDIT: s/point 2/point 3/ (i.e. 2 -> 3)

letmaik commented 5 years ago

Sounds good, we'll have the minimal example ready soon, which should give you enough to play around with.

dmey commented 5 years ago

@zbeekman thanks. https://github.com/WRF-CMake/WRF/issues/24#issuecomment-500595341 should allow you to carry out 2 and 3.