Closed fronzee closed 5 years ago
Speedup depends on many factors. Have you used regular WRF before with the same inputs and achieved better performance? Also, what kind of machine are you running this on and how big is the domain in terms of grid cells?
It is a small initial test domain, to check whether all the tools are working correctly. The 3 km grid is 35x35. I haven't run it before. I am running it on Google Cloud Compute Engine.
Small domains like that probably don't benefit from MPI since the overhead of splitting/combining the computation is too big. I would test it with a larger grid. Also, you haven't mentioned whether you test on multiple MPI-linked nodes, or whether you run on a single VM which has multiple cores (how many?) available.
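The overhead argument can be illustrated with a toy strong-scaling model (purely illustrative constants, not WRF's actual cost model): per step, each rank pays a compute cost proportional to its patch area, plus a halo-exchange cost proportional to its patch perimeter, plus a fixed message latency.

```python
import math

# Toy strong-scaling model (illustrative constants only, not WRF's
# real timing): interior compute scales with patch area, halo
# exchange with patch perimeter, plus a fixed per-step latency.

def step_time(nx, ny, nprocs, t_cell=1.0, t_halo=20.0, t_latency=5000.0):
    """Estimated time per model step when an nx x ny grid is split
    into nprocs roughly-square patches."""
    px = max(int(math.sqrt(nprocs)), 1)  # ranks along x (square-ish layout)
    py = nprocs // px                    # ranks along y
    patch_x, patch_y = nx / px, ny / py
    compute = t_cell * patch_x * patch_y
    halo = t_halo * 2 * (patch_x + patch_y) + t_latency
    return compute + (halo if nprocs > 1 else 0.0)

def speedup(nx, ny, nprocs):
    return step_time(nx, ny, 1) / step_time(nx, ny, nprocs)

# A 35x35 domain actually slows down on 16 ranks under this model,
# while a much larger domain scales far better.
print(round(speedup(35, 35, 16), 2))      # -> 0.21 (slower than serial!)
print(round(speedup(1000, 1000, 16), 2))  # -> 11.43
```

The exact numbers depend entirely on the assumed constants; the point is that for a tiny patch the perimeter and latency terms dwarf the interior compute, so adding ranks can make things worse.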
Thanks for helping. I will test with a bigger domain. I am testing on a single VM with 16 cores; however, there was not even a slight difference in calculation time with 1, 6, 8, or 16 cores. Is it possible to see whether and how WRF splits the job between cores? What should I be looking for?
Thank you
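For reference (an illustrative fragment, not taken from this issue): a dmpar build of WRF decomposes the domain automatically across MPI ranks, but the layout can be pinned in the `&domains` section of `namelist.input`, where the product of the two values must equal the number of ranks passed to `mpiexec -n`:

```
&domains
 nproc_x = 4,   ! ranks along x (-1 lets WRF choose automatically)
 nproc_y = 4,   ! ranks along y; nproc_x * nproc_y must equal mpiexec -n
/
```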
Please direct any questions not related to the CMake variant of WRF to the WRF community directly, e.g. http://forum.wrfforum.com/.
@fronzee have you been able to solve your problem? If so, I would close this issue as it does not appear to be related to WRF-CMake.
No, I couldn't solve it. Maybe it was a problem with mpirun. I compiled from the original source (not CMake) with MPICH and now it is running.
MPICH is just one implementation of the MPI standard -- `mpirun` and `mpiexec` are programs to control the execution of MPI programs. The binary distributions for Linux were also created using the MPICH library (specifically, version 3.2.1). @fronzee could you share your `wrf.exe` input files as well as the `namelist.input` so that I can try to run it locally (simply compress and drag and drop the archive into this issue)? It would also be helpful if you could tell me your system configuration (e.g. Linux distro and version, gcc and mpiexec versions) and how you run it (e.g. `mpiexec ...`) -- the more details, the easier and faster it will be to sort this out. Thanks!
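A sketch of commands for gathering the details requested above (guarded so each line is skipped rather than failing if the tool is missing):

```shell
# System configuration details for an MPI bug report:
uname -a                                              # kernel and architecture
[ -f /etc/os-release ] && head -n 2 /etc/os-release   # distro and version
command -v gcc >/dev/null && gcc --version | head -n 1 || true
command -v mpiexec >/dev/null && mpiexec --version | head -n 1 || true
```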
Closing as no response from the user. @fronzee feel free to reopen if the issue persists.
It seems that the processes do not communicate with each other, and the calculation times are the same whether running with 1, 6, 8, or more processes.
The `wrf.exe` job finishes OK, but with exactly the same run time regardless of the number of cores, using the wrf-cmake-4.0-dmpar-basic-release-linux build.
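One way to check whether the MPI decomposition actually happened: a dmpar run of `wrf.exe` writes one log file per MPI rank (`rsl.out.0000`, `rsl.out.0001`, ...). The snippet below mocks those files purely to illustrate the checks; a real run creates them itself.

```shell
# Mock the per-rank logs a 4-rank `mpiexec -n 4 ./wrf.exe` run would
# produce (contents are illustrative):
for i in 0000 0001 0002 0003; do
  printf 'Ntasks in X 2 , ntasks in Y 2\n' > "rsl.out.$i"
done

# If only rsl.out.0000 exists, only one rank ever started:
ls rsl.out.* | wc -l          # should equal the -n value passed to mpiexec

# The decomposition WRF chose is printed near the top of rank 0's log:
grep -i 'ntasks' rsl.out.0000
```

If every run produces only `rsl.out.0000`, the launcher is starting a single process N times (or just once) rather than one cooperating job, which would match the identical timings you observed.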