periodic_box_low_res.zip
Low-resolution runs with rhostar = 0.1 and box lengths of 1 in r and z. Neither run uses radial diffusion, but the run `2D-periodic-gk.toml` uses a gyroaverage. Clear differences can be seen: the gyrokinetic run tends to have smoother features and smaller amplitudes in the peaks and troughs of the solution, whereas the run `2D-periodic-dk.toml` obtains grid-scale behaviour. A careful study of this 2D model might be useful for understanding the slab ITG instability. @johnomotani @mabarnes @LucasMontoya4. Further efficiency improvements in the GK operator are likely required to go to higher resolutions.
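One way to make the grid-scale comparison quantitative would be a small spectral diagnostic along the lines of the sketch below. This is only an illustration: loading the field slices and the names `phi_gk`/`phi_dk` are placeholders, not the moment_kinetics API.

```julia
using FFTW

# Fraction of spectral power in the top quarter of resolved wavenumbers;
# grid-scale oscillations show up as a large value here.
function grid_scale_fraction(phi::AbstractVector{<:Real})
    fluctuation = phi .- sum(phi) / length(phi)   # remove the mean
    amplitudes = abs2.(rfft(fluctuation))
    n = length(amplitudes)
    high_k = sum(amplitudes[max(1, 3n ÷ 4):end])  # near-Nyquist band
    total = sum(amplitudes)
    return total == 0 ? 0.0 : high_k / total
end

# Expect grid_scale_fraction(phi_dk) >> grid_scale_fraction(phi_gk) if the
# drift-kinetic solution is dominated by grid-scale features.
```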
Marking this PR as ready for review, as shared-memory MPI is now working; looking for feedback before implementing distributed-memory MPI support. @johnomotani @mabarnes
Edit: Some tests still seem to be failing on CI...
Commit cd85aa7 (https://github.com/mabarnes/moment_kinetics/pull/187/commits/cd85aa7ae8cdc5bf66340eb726dabe6238e74a1c) passes all tests apart from the parallel tests on macOS, which mysteriously time out after seeming to finish with the tests reportedly passing (although a different number of tests seems to be carried out compared to the Ubuntu case).
The macOS parallel tests do this fairly frequently. Often just re-running the jobs lets them pass, so I've not investigated further.
The macOS parallel tests were being slow, so I cut them down to 1 test run (4 processes, no `--long`). On the Linux job, we test 4 processes, 3 processes, and finally 2 processes. On the 2-process test we use the `--long` flag - that's why there will be some more tests reported for that run than there are in the macOS job.
This issue can act as the tracker: https://github.com/mabarnes/moment_kinetics/issues/186
It looks like some of the parallel tests might have broken; I'm rerunning them, as I don't understand how that can be.
It looks like parallel tests sometimes (often?) fail and sometimes pass, which is a bit suspicious. It would be good to add a 'debug check' that uses the gyroaverage - I'll have a look at that now.
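For reference, one simple form such a check could take is sketched below. This is purely a sketch: `debug_check_gyroaverage` and its arguments are hypothetical names, not the existing debug-test machinery. The idea is to recompute the gyroaverage with a trusted serial routine and assert that the shared-memory parallel result matches.

```julia
# Hypothetical sketch of a gyroaverage 'debug check': compare the result
# computed in parallel against a serial reference and fail loudly on any
# mismatch. Both arrays are assumed to hold the gyroaveraged field.
function debug_check_gyroaverage(parallel_result::AbstractArray,
                                 serial_reference::AbstractArray;
                                 rtol::Real=1.0e-12)
    if !isapprox(parallel_result, serial_reference; rtol=rtol)
        i = argmax(abs.(parallel_result .- serial_reference))
        error("gyroaverage debug check failed at $i: ",
              "parallel=$(parallel_result[i]), serial=$(serial_reference[i])")
    end
    return nothing
end
```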
Ah, the PR tests (in parallel) were failing because of changes from #203 (which had been merged into `master`). I've fixed that now (I think), but the CI runners are failing to install OpenMPI. I don't think that failure is anything to do with us, so hopefully it will sort itself out in a little while.
To do list: