Closed · ranocha closed this 2 months ago
Attention: Patch coverage is 99.66887% with 1 line in your changes missing coverage. Please review.
Project coverage is 97.57%. Comparing base (8fcba88) to head (d4f9dd6). Report is 1 commit behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| src/equations/serre_green_naghdi_1d.jl | 99.64% | 1 Missing :warning: |
Another question, which just came to my mind: what do we do if you pass `bathymetry_flat` to the equation, but then set a non-constant bathymetry in the initial condition? I hadn't thought about that before. Should this raise a warning, an error, or nothing?
Where could we add such a check? My first idea was to look at `create_cache`, but that doesn't have the initial condition.
It could live in `Semidiscretization`, but then I would only add such a check once all equations have a field `bathymetry` (or `bathymetry_type`), i.e., after finishing #130.
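One possible shape for such a check, sketched under assumptions: the helper name `check_bathymetry_consistency`, the `BathymetryFlat` dispatch, and the assumption that the discretized initial state stores the bathymetry as its last component are all illustrative, not the package's actual API.

```julia
# Hypothetical sketch of a consistency check that could run when the
# Semidiscretization is set up, after the initial condition is evaluated.
function check_bathymetry_consistency(equations, q0)
    if equations.bathymetry_type isa BathymetryFlat
        b = q0.x[end]  # assumed layout: bathymetry stored as the last variable
        # isapprox(b[1]) is the curried form, i.e. x -> isapprox(x, b[1])
        if !all(isapprox(b[1]), b)
            throw(ArgumentError("`bathymetry_flat` was passed to the equations, " *
                                "but the initial condition sets a non-constant bathymetry"))
        end
    end
    return nothing
end
```

Whether this should throw, warn, or do nothing is exactly the open question above; the sketch only shows where the comparison could live.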
Now we have a bunch of conflicts to solve... 🙄
Done
Thanks!
3. Can you report a convergence test with the soliton solution?
```
julia> convergence_test("examples/serre_green_naghdi_1d/serre_green_naghdi_soliton.jl", 5, N = 64)
####################################################################################################
l2
η                           v                           D
N     error     EOC         N     error     EOC         N     error     EOC
64    1.25e-01  -           64    3.54e-01  -           64    0.00e+00  -
128   1.43e-02  3.12        128   3.97e-02  3.15        128   0.00e+00  NaN
256   9.90e-04  3.85        256   2.74e-03  3.86        256   0.00e+00  NaN
512   6.23e-05  3.99        512   1.73e-04  3.99        512   0.00e+00  NaN
1024  3.29e-06  4.24        1024  9.13e-06  4.24        1024  0.00e+00  NaN
mean            3.80        mean            3.81        mean            NaN
----------------------------------------------------------------------------------------------------
linf
η                           v                           D
N     error     EOC         N     error     EOC         N     error     EOC
64    5.74e-02  -           64    1.61e-01  -           64    0.00e+00  -
128   7.23e-03  2.99        128   1.94e-02  3.05        128   0.00e+00  NaN
256   5.07e-04  3.83        256   1.36e-03  3.83        256   0.00e+00  NaN
512   3.19e-05  3.99        512   8.64e-05  3.98        512   0.00e+00  NaN
1024  1.63e-06  4.29        1024  4.44e-06  4.28        1024  0.00e+00  NaN
mean            3.78        mean            3.79        mean            NaN
```
```
julia> convergence_test("examples/serre_green_naghdi_1d/serre_green_naghdi_soliton_upwind.jl", 5, N = 64)
l2
η                           v                           D
N     error     EOC         N     error     EOC         N     error     EOC
64    1.25e-01  -           64    3.55e-01  -           64    0.00e+00  -
128   1.43e-02  3.13        128   3.99e-02  3.15        128   0.00e+00  NaN
256   9.78e-04  3.87        256   2.74e-03  3.86        256   0.00e+00  NaN
512   6.21e-05  3.98        512   1.75e-04  3.97        512   0.00e+00  NaN
1024  3.85e-06  4.01        1024  1.08e-05  4.02        1024  0.00e+00  NaN
mean            3.75        mean            3.75        mean            NaN
----------------------------------------------------------------------------------------------------
linf
η                           v                           D
N     error     EOC         N     error     EOC         N     error     EOC
64    5.72e-02  -           64    1.61e-01  -           64    0.00e+00  -
128   7.26e-03  2.98        128   1.95e-02  3.04        128   0.00e+00  NaN
256   5.18e-04  3.81        256   1.43e-03  3.78        256   0.00e+00  NaN
512   3.38e-05  3.94        512   9.37e-05  3.93        512   0.00e+00  NaN
1024  1.94e-06  4.12        1024  5.49e-06  4.09        1024  0.00e+00  NaN
mean            3.71        mean            3.71        mean            NaN
```
Both elixirs use an accuracy order of 4.
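For reference, the EOC columns in these tables follow the standard definition for grids refined by a factor of 2 (this is an assumption about what `convergence_test` computes, based on the reported numbers):

```julia
# Experimental order of convergence between two successive resolutions,
# assuming the grid is refined by a factor of 2 at each step.
eoc(err_coarse, err_fine) = log2(err_coarse / err_fine)

# Checking against the first two η errors in the l2 table above:
eoc(1.25e-1, 1.43e-2)  # ≈ 3.1, matching the reported EOC of 3.12
```

The mean EOCs of roughly 3.7–3.8 are consistent with the fourth-order operators once the coarsest, pre-asymptotic level is included in the average.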
- Maybe a strange one: why didn't the test values change after the various fixes? Since there were some genuine changes in the numerics (the psi fix and the fix in `entropy_modified`), I would have assumed these should have changed the test values. Is it because the changes are so small that they are within the tolerance? If so, can you re-run the elixirs and update the reference values?
I re-ran the Dingemans setup with the current code and updated the test values; only the conservation error of the momentum changed slightly (within tolerances).
2. Did you perform some performance tests similar to Section 11.10 of your preprint and compare them to the code from your paper? I would be happy if we could reach a performance here similar to what you report in the paper.
That led me down a rabbit hole 😅 Along the way, I found a quite nasty bug (5a4dbb2265b568b459e21df11cd7128e61045391) that manifested for second-order operators. With the fix, I get the following results:
```
julia> trixi_include("examples/serre_green_naghdi_1d/serre_green_naghdi_conservation.jl", io = devnull, bathymetry_type = bathymetry_variable, N = 1_000);
[ Info: You just called `trixi_include`. Julia may now compile the code, please be patient.

julia> @benchmark solve(ode, Tsit5(); save_everystep = false, callback = callbacks)
BenchmarkTools.Trial: 26 samples with 1 evaluation.
 Range (min … max):  190.737 ms … 196.548 ms  ┊ GC (min … max): 3.77% … 4.10%
 Time  (median):     193.543 ms               ┊ GC (median):    3.69%
 Time  (mean ± σ):   193.423 ms ±   1.745 ms  ┊ GC (mean ± σ):  3.67% ± 0.23%

  █ ▁ ▁      ▁▁▁▁ █    ▁    ▁ █ ▁█ ▁▁ █   ▁ ▁           ▁▁ █
  █▁█▁█▁▁▁▁▁▁▁████▁█▁▁▁▁█▁▁▁▁█▁█▁██▁██▁█▁▁▁█▁█▁▁▁▁▁▁▁▁▁▁▁▁▁██▁█ ▁
  191 ms           Histogram: frequency by time          197 ms <

 Memory estimate: 684.67 MiB, allocs estimate: 67864.

julia> trixi_include("examples/serre_green_naghdi_1d/serre_green_naghdi_conservation.jl", io = devnull, bathymetry_type = bathymetry_variable, N = 2_000);
[ Info: You just called `trixi_include`. Julia may now compile the code, please be patient.

julia> @benchmark solve(ode, Tsit5(); save_everystep = false, callback = callbacks)
BenchmarkTools.Trial: 13 samples with 1 evaluation.
 Range (min … max):  381.811 ms … 436.045 ms  ┊ GC (min … max): 2.50% … 4.59%
 Time  (median):     409.998 ms               ┊ GC (median):    3.44%
 Time  (mean ± σ):   406.086 ms ±  17.010 ms  ┊ GC (mean ± σ):  3.40% ± 0.64%

  █▁         ▁              ▁  ▁█ ▁  ▁▁             ▁         ▁
  ██▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁██▁█▁▁██▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁█ ▁
  382 ms           Histogram: frequency by time          436 ms <

 Memory estimate: 1.36 GiB, allocs estimate: 72824.

julia> trixi_include("examples/serre_green_naghdi_1d/serre_green_naghdi_conservation.jl", io = devnull, bathymetry_type = bathymetry_variable, N = 3_000);
[ Info: You just called `trixi_include`. Julia may now compile the code, please be patient.

julia> @benchmark solve(ode, Tsit5(); save_everystep = false, callback = callbacks)
BenchmarkTools.Trial: 9 samples with 1 evaluation.
 Range (min … max):  540.357 ms … 621.959 ms  ┊ GC (min … max): 1.34% … 1.74%
 Time  (median):     597.424 ms               ┊ GC (median):    1.82%
 Time  (mean ± σ):   580.736 ms ±  33.005 ms  ┊ GC (mean ± σ):  1.85% ± 0.28%

  █
  █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▁▇▁▁▇▁▁▁▁▇▁▁▁▁▁▁▁▇ ▁
  540 ms           Histogram: frequency by time          622 ms <

 Memory estimate: 2.04 GiB, allocs estimate: 84813.

julia> trixi_include("examples/serre_green_naghdi_1d/serre_green_naghdi_conservation.jl", io = devnull, bathymetry_type = bathymetry_variable, N = 4_000);
[ Info: You just called `trixi_include`. Julia may now compile the code, please be patient.

julia> @benchmark solve(ode, Tsit5(); save_everystep = false, callback = callbacks)
BenchmarkTools.Trial: 7 samples with 1 evaluation.
 Range (min … max):  816.021 ms … 858.928 ms  ┊ GC (min … max): 1.67% … 1.60%
 Time  (median):     825.590 ms               ┊ GC (median):    1.66%
 Time  (mean ± σ):   829.784 ms ±  14.942 ms  ┊ GC (mean ± σ):  1.68% ± 0.08%

  ██       █   █    █            █                             █
  ██▁▁▁▁▁▁▁█▁▁▁█▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  816 ms           Histogram: frequency by time          859 ms <

 Memory estimate: 2.71 GiB, allocs estimate: 84801.

julia> trixi_include("examples/serre_green_naghdi_1d/serre_green_naghdi_conservation.jl", io = devnull, bathymetry_type = bathymetry_variable, N = 5_000);
[ Info: You just called `trixi_include`. Julia may now compile the code, please be patient.

julia> @benchmark solve(ode, Tsit5(); save_everystep = false, callback = callbacks)
BenchmarkTools.Trial: 5 samples with 1 evaluation.
 Range (min … max):  1.006 s … 1.042 s  ┊ GC (min … max): 1.70% … 1.98%
 Time  (median):     1.019 s            ┊ GC (median):    1.73%
 Time  (mean ± σ):   1.022 s ± 13.656 ms  ┊ GC (mean ± σ): 1.82% ± 0.24%

  █                █  █               █                   █
  █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  1.01 s          Histogram: frequency by time      1.04 s <

 Memory estimate: 3.39 GiB, allocs estimate: 84821.
```
Thus, the results are roughly comparable to the paper (a little slower, but often within the standard deviation).
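As a quick sanity check (not part of the original comment), the median times above scale close to linearly in the resolution, as one would hope for this elliptic-solve-per-step scheme:

```julia
# Median runtimes from the benchmarks above (seconds) vs. resolution N
N = [1_000, 2_000, 3_000, 4_000, 5_000]
t = [0.194, 0.410, 0.597, 0.826, 1.019]

t ./ N  # roughly constant around 2e-4 s per grid point, i.e. near-linear scaling
```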
The results look good. Thanks a lot!
In general, we have some code duplication. Would it be possible and wise to put some recurring patterns into functions, e.g., things like

```julia
if equations.bathymetry_type isa BathymetryMildSlope
    factor = 0.75
else # equations.bathymetry_type isa BathymetryVariable
    factor = 1.0
end
```
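One idiomatic way to factor this out in Julia is to dispatch on the bathymetry type instead of branching; a sketch (the helper name `bathymetry_factor` is an assumption, not the name used in the PR):

```julia
# Hypothetical helper replacing the repeated if/else branches:
# each bathymetry type carries its own factor via multiple dispatch.
bathymetry_factor(::BathymetryMildSlope) = 0.75
bathymetry_factor(::BathymetryVariable)  = 1.0

# usage inside the RHS kernels:
# factor = bathymetry_factor(equations.bathymetry_type)
```

Since the bathymetry type is a singleton known at compile time, the branch disappears entirely and the constant is inlined.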
Done in c7ff29a129e56ec43da45bb35944eb98caa9473c and 0062720c18c8e60a7e1970f333fc5ec52f373497
Now we only have this nasty codecov error on macOS left. Since it seems to fail consistently, I would like to find a fix before merging this PR.
This is the next step of #129