PTsolvers / JustRelax.jl

Pseudo-transient accelerated iterative solvers
https://ptsolvers.github.io/JustRelax.jl/dev/
MIT License

Problem with temperature advection in the Blankenbach Benchmark #108

Closed LukasFuchs closed 7 months ago

LukasFuchs commented 8 months ago

Detailed description will follow....

luraess commented 8 months ago

Thanks for looking into this!

Side note: Ideally, you'd open an issue about the "problem" and a PR for the fix.

LukasFuchs commented 8 months ago

Thanks!

Well, I also created an issue about the "problem" in JustPIC, where the actual issue seems to be. This pull request is simply for the Blankenbach benchmark. I am still getting familiar with the correct procedure, and since the issue was observed rather late during the last Hackathon, this also serves as a reminder.

More detailed information about the problem will be put into the issue in JustPIC. Once this is resolved, the benchmark should work (I haven't tried other advection schemes yet, which might be successful).

LukasFuchs commented 8 months ago

The Blankenbach benchmark works; however, the scaling of the V_rms and the Nusselt number needs to be checked again. The script is mostly a modification of the already existing layered convection benchmark in the Particles2D folder.

I do have some minor comments:

1) In the script, constant thermal top and bottom boundary conditions are defined via the TemperatureBoundaryConditions function. However, one needs to overwrite the temperature later in the script to avoid it being overwritten by the interpolation from the particles onto the grid (line 296ff):

# interpolate fields from particle to grid vertices
particle2grid!(T_buffer, pT, xvi, particles)
# re-impose the Dirichlet values the interpolation just overwrote
@views T_buffer[:, end]      .= 273.0   # top boundary
@views T_buffer[:, 1]        .= 1273.0  # bottom boundary
@views thermal.T[2:end-1, :] .= T_buffer
temperature2center!(thermal)

This is rather inconvenient since the boundary condition definition already implies this. Maybe one could add this to the particle advection scheme or to the diffusion solver? (A minimal helper sketch follows after this list.)

2) I was wondering whether the injection of particles has an effect, either before or after convection. I wanted to test this as well.

3) I am also curious about the order of the operator splitting in the heat equation, that is, first solving the diffusion part and then the advection part. So far, I have always programmed it the other way around. I am not sure whether this has an effect.

4) Maybe I misunderstand the function grid2particle_flip! in line 334. Following the name of the function and the comments, the temperature is interpolated from the grid vertices onto the particles. However, the function is executed after the advection of the particles. Thus, the interpolation should be the other way around, from the particles onto the grid, correct?
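
For item 1, a minimal sketch of such a helper that re-imposes the Dirichlet values after every particle-to-grid interpolation (the function name is hypothetical; this is not an existing JustRelax API):

# hypothetical helper: re-apply the constant top/bottom temperatures after
# particle2grid!, so the interpolation cannot overwrite the BCs
function reapply_dirichlet!(T_buffer, Ttop, Tbot)
    @views T_buffer[:, end] .= Ttop  # top boundary
    @views T_buffer[:, 1]   .= Tbot  # bottom boundary
    return nothing
end

# usage, matching the snippet in item 1:
# reapply_dirichlet!(T_buffer, 273.0, 1273.0)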

Below are the fields of temperature (on the grid), horizontal and vertical velocity, and temperature on the particles after 5000 iterations, for a Rayleigh number of 1e4:

[figure: fields after 5000 iterations, Ra = 1e4]

and for Ra = 1e6:

[figure: fields after 5000 iterations, Ra = 1e6]

The subgrid diffusion seems to be weaker for the Ra = 1e4 case than for the Ra = 1e6 case. I assume this goes away when using a different advection scheme.

albert-de-montserrat commented 8 months ago
  1. In the script, constant thermal top and bottom boundary conditions are defined via the TemperatureBoundaryConditions function. However, one needs to overwrite the temperature later in the script to avoid it being overwritten by the interpolation from the particles onto the grid (line 296ff):
# interpolate fields from particle to grid vertices
particle2grid!(T_buffer, pT, xvi, particles)
@views T_buffer[:, end]      .= 273.0        
@views T_buffer[:, 1]        .= 1273.0
@views thermal.T[2:end-1, :] .= T_buffer
temperature2center!(thermal)

This is rather inconvenient since the boundary condition definition already implies this. Maybe one could add this to the particle advection scheme or to the diffusion solver?

Completely agree, I will address this in an incoming PR by modifying the BC objects.

  2. I was wondering whether the injection of particles has an effect, either before or after convection. I wanted to test this as well.

I have also been testing an improved injection where the T of the injected particle is interpolated (inverse distance weighted) from the other particles in the same cell (this is in the JustPIC#adm/injection2 branch if you want to test it), as opposed to the current interpolation from the grid vertices.
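
For illustration, a self-contained sketch of that inverse-distance weighting (hedged: this is not the actual JustPIC code; x_new, xp, and Tp are illustrative names):

using LinearAlgebra: norm

# inverse-distance-weighted temperature for a particle injected at `x_new`,
# from positions `xp` and temperatures `Tp` of the particles in the same cell
function idw_temperature(x_new, xp, Tp; ε = eps())
    w = [inv(norm(x_new .- xi) + ε) for xi in xp]  # inverse-distance weights
    return sum(w .* Tp) / sum(w)
end

# e.g. idw_temperature([0.5, 0.5], [[0.4, 0.6], [0.7, 0.3]], [1000.0, 1100.0])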

  3. I am also curious about the order of the operator splitting in the heat equation, that is, first solving the diffusion part and then the advection part. So far, I have always programmed it the other way around. I am not sure whether this has an effect.
  4. Maybe I misunderstand the function grid2particle_flip! in line 334. Following the name of the function and the comments, the temperature is interpolated from the grid vertices onto the particles. However, the function is executed after the advection of the particles. Thus, the interpolation should be the other way around, from the particles onto the grid, correct?

I believe you may be correct. I'm trying this right now:

# Advection --------------------
pT0.data .= pT.data
grid2particle_flip!(pT, xvi, T_buffer, Told_buffer, particles)
subgrid_diffusion!(pT, pT0, pPhases, rheology, stokes, particles, di, dt)
# advect particles in space
advection_RK!(particles, @velocity(stokes), grid_vx, grid_vy, dt, 2 / 3)
# clean_particles!(particles, xvi, particle_args)
# advect particles in memory
move_particles!(particles, xvi, particle_args)
# check if we need to inject particles
inject = check_injection(particles)
# inject && break
inject && inject_particles_phase!(particles, pPhases, (pT, ), (T_buffer, ), xvi)
# update phase ratios
@parallel (@idx ni) phase_ratios_center(phase_ratios.center, pPhases)

where

import JustRelax.compute_ρCp

function subgrid_diffusion!(pT, pT0, pPhases, rheology, stokes, particles, di, dt)
    ni = size(pT)
    @parallel (@idx ni) subgrid_diffusion!(pT, pT0, pPhases, rheology, stokes.P, particles.index, di, dt)
end

@parallel_indices (I...) function subgrid_diffusion!(pT, pT0, pPhases, rheology, P, index, di, dt)

    P_cell = P[I...] # currently the pressure at the cell center; it should be interpolated to the particles

    for ip in JustRelax.cellaxes(pT)
        # early escape if there is no particle in this memory location
        doskip(index, ip, I...) && continue

        pT0ᵢ = @cell pT0[ip, I...]
        pTᵢ = @cell pT[ip, I...]
        phase = Int(@cell(pPhases[ip, I...]))
        argsᵢ = (; T = pTᵢ, P = P_cell)
        # dimensionless numerical diffusion coefficient (0 ≤ d ≤ 1)
        d = 1
        # Compute the characteristic timescale `dt₀` of the local cell
        ρCp = compute_ρCp(rheology, phase, argsᵢ)
        K = compute_conductivity(rheology, phase, argsᵢ)
        sum_dxi = mapreduce(x-> inv(x)^2, +, di)
        dt₀ = ρCp / (2 * K * sum_dxi)
        # subgrid diffusion of the i-th particle
        @cell pT[ip, I...] = pT0ᵢ - (pT0ᵢ - pTᵢ) * exp(-d * dt / dt₀)
    end

    return nothing
end
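
For reference, the per-particle update implemented above is the relaxation

$$T_p^{\mathrm{new}} = T_p^{0} - \left(T_p^{0} - T_p\right)\, e^{-d\,\Delta t/\Delta t_0}, \qquad \Delta t_0 = \frac{\rho C_p}{2k\left(\Delta x^{-2} + \Delta z^{-2}\right)},$$

where $T_p^{0}$ is the particle temperature before the grid-to-particle transfer, $T_p$ the value after grid2particle_flip!, and $0 \le d \le 1$ the dimensionless subgrid coefficient.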

There's still something slightly off, as there are still some instabilities, but it looks better (Ra = 1e6, nx = ny = 100):

albert-de-montserrat commented 8 months ago

It would be great to also have a non-dimensional version. Here and here are some examples of how to non-dimensionalize with GeoParams.

LukasFuchs commented 8 months ago

Thanks for the examples.

I am working on the non-dimensionalization as well; however, in the corresponding chapter of Gerya's book, the velocity RMS and the Nu number are calculated from the dimensional values and scaled directly. So that should already work, even without the scaling.

Once one scales the problem with GeoParams, the RMS and the Nu number can be calculated without the scaling constants.
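
For reference, a minimal sketch of how this could look with GeoParams (the characteristic scales below are assumptions chosen to match the dimensional setup discussed later in this thread, not values taken from the script):

using GeoParams

# assumed Blankenbach-like characteristic scales
CharDim = GEO_units(length = 1000km, temperature = 1000C, viscosity = 1e23Pas)

η_nd  = nondimensionalize(1e23Pas, CharDim)  # non-dimensional viscosity
ΔT_nd = nondimensionalize(1000C,   CharDim)  # non-dimensional temperature contrast
v_dim = dimensionalize(1.0, cm/yr, CharDim)  # back to dimensional units after solving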

LukasFuchs commented 8 months ago

Ok, so I updated the particle advection as @albert-de-montserrat suggested (the branch is up to date).

Following Gerya's description, I also calculated the Nusselt number and the velocity RMS within the script (the corresponding equations are listed in the script as well).

Unfortunately, the values are quite far off from the benchmark values. Strangely, they fit a little better for the high-Ra case.

For Ra = 1e4, the benchmark values are Nu = 4.8844 and V_rms = 42.865. The code gives Nu ~ 8 and V_rms ~ 21 (I forgot to write them down, hence the approximate values).

For Ra = 1e6, the benchmark values are Nu = 21.972 and V_rms = 833.99. The code gives Nu ~ 21 and V_rms ~ 200.

I add the final fields and the time series below.

Ra = 1e4: [figures: fields after 5000 iterations, time series of V_rms and Nu]

Ra = 1e6: [figures: fields after 5000 iterations, time series of V_rms and Nu]

The fields look rather fine for Ra = 1e4; for Ra = 1e6 they still look rather chaotic and do not seem to reach the steady state they should.

I still want to test different advection schemes (WENO5 and upwind), but I simply ran out of time today. I am not sure whether I should leave the PR as a draft, though.

boriskaus commented 8 months ago

That is an enormous difference in Blankenbach terms, as far as I recall from my earlier tests (particularly at smaller Ra). It makes me suspect that the parameters are off or that there is something iffy in the way you compute Nu.

boriskaus commented 8 months ago

Here the same with LaMEM.jl:

# Blankenbach setup using LaMEM
using LaMEM, GeophysicalModelGenerator, Statistics, Plots  # Plots is needed for the plotting at the end

# Parameters:
L  = 1000       # domain size [km]
g  = 10         # gravity [m/s²]
η  = 1e23       # viscosity [Pa s]
ρ  = 4000       # density [kg/m³]
α  = 2.5e-5     # thermal expansivity [1/K]
cp = 1250       # heat capacity [J/kg/K]
k  = 5          # thermal conductivity [W/m/K]
ΔT = 1000       # temperature contrast [K]
κ = k/(ρ*cp)
Ra =  α*g*ρ*ΔT*(L*1e3)^3/η/κ
@info Ra

# Create model
model = Model(Grid(nel=(32,32), x=[-L/2,L/2],z=[-L,0]), 
          Time(nstep_max=10e3, dt_min=1e-3, dt=0.1, dt_max=1, time_end=1e5, nstep_out=100), 
          SolutionParams(gravity=[0,0,-g], act_temp_diff=1),
          Solver(PETSc_options=["-snes_max_it 1 -snes_rtol 1e-3"]),
          BoundaryConditions(temp_bot=ΔT, temp_top=0),
          Output(out_dir="BB_1e4"))

# Set material properties
mantle = Phase(ID=0,Name="mantle",eta=η,rho=ρ, Cp=cp, alpha=α, k=k)
add_phase!(model, mantle)

# Initial thermal structure
AddLayer!(model; zlim=(-L,0), phase=ConstantPhase(0),T= LinearTemp(Ttop=0, Tbot=ΔT) )    # Initial linear T
AddBox!(model; xlim=(350,400),zlim=(-900,-800), phase=ConstantPhase(0), T=ConstantTemp(T=ΔT) ) # Perturbation

# run lamem
run_lamem(model)

# postprocessing
average(a) = 0.25*(a[2:end,2:end] + a[1:end-1,2:end]  + a[2:end,1:end-1]  + a[1:end-1,1:end-1])

function postprocess(model, L, κ)
    Timestep, FileNames, _ = Read_LaMEM_simulation(model);

    Vrms, time, Nu = [], [], []
    for t in Timestep
        data = Read_LaMEM_timestep(model, t, fields=("velocity", "temperature"))     # read data

        # extract 2D velocity, temperature fields @ vertices
        Vx, Vz = data[1].fields.velocity[1][:,1,:], data[1].fields.velocity[3][:,1,:]
        T      = data[1].fields.temperature[:,1,:]
        Vx_c, Vz_c = average(Vx), average(Vz)
        SecYear = 3600*24*365.25;
        Vx_c .*= 1e-2/SecYear   # cm/yr -> m/s
        Vz_c .*= 1e-2/SecYear   # cm/yr -> m/s
        Δx, Δz = data[1].x[2,2,2]- data[1].x[1,1,1], data[1].z[2,2,2]- data[1].z[1,1,1]

        # Nusselt number
        dTdz = (T[:,end] - T[:,end-1])/Δz
        dTdz_c = (dTdz[2:end] + dTdz[1:end-1])/2
        nu   = -L*1e3 * sum(dTdz_c*Δx)/(length(dTdz_c)*ΔT*Δx)       

        vrms = L*1e3/κ*sqrt(sum((Vx_c.^2 + Vz_c.^2)*Δx*Δz)/(L*1e3)^2) # definition in Blankenbach et al. 1989
        Vrms = push!(Vrms, vrms)
        Nu   = push!(Nu, nu)
        time = push!(time, t)
    end

    return time, Vrms, Nu
end

time, Vrms, Nu = postprocess(model, L, κ)

p1 = plot(time[2:end],   Nu[2:end], ylabel="Nu", label=:none)
p2 = plot(time[2:end], Vrms[2:end], xlabel="time", ylabel="Vrms", label=:none)
plot(p1,p2,layout=(2,1))
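
For reference, the diagnostics computed above correspond to the Blankenbach et al. (1989) definitions (with the overline denoting the horizontal average of the vertical temperature gradient at the top boundary):

$$\mathrm{Ra} = \frac{\alpha g \rho \Delta T L^3}{\eta \kappa}, \qquad \mathrm{Nu} = -\frac{L}{\Delta T}\,\overline{\left.\frac{\partial T}{\partial z}\right|_{\mathrm{top}}}, \qquad V_{\mathrm{rms}} = \frac{L}{\kappa}\sqrt{\frac{1}{L^2}\int_\Omega \left(v_x^2 + v_z^2\right)\,\mathrm{d}x\,\mathrm{d}z}.$$

With the parameters above, $\kappa = k/(\rho c_p) = 10^{-6}\,\mathrm{m^2/s}$, so the setup gives $\mathrm{Ra} = 10^4$.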

Ra=1e4:

julia> mean(Nu[end-20:end])
5.064332782699948
julia> mean(Vrms[end-20:end])
42.45933897065953

Ra=1e6:

julia> mean(Vrms[end-20:end])
799.2302791492257
julia> mean(Nu[end-20:end])
14.814388537633985

As I'm not using grid refinement in the boundary layers here, the Nu number is quite a bit off. Vrms is rather close.

albert-de-montserrat commented 8 months ago

Thanks, @boriskaus!

LukasFuchs commented 8 months ago

Thanks @boriskaus

I also tested the benchmark with the dimensional version of my MATLAB code (explicit diffusion solver, semi-Lagrangian advection) and applied the same equations for Nu and V_rms as Gerya (and as used in the Julia version here); those are essentially the same equations as used in the Blankenbach benchmark.

However, I do get the same values as in the scaled version of my code (the numbers are still a bit off, probably due to the constant grid spacing, etc.), so I am pretty certain the equations implemented in the Julia version are correct (my code can be found here: https://github.com/LukasFuchs/FDCSGm/tree/dev, under MixedHeatedSystem/ThermalConvection, just in case).

Looking at what Boris used in his LaMEM.jl script, the equations are the same. @boriskaus, what absolute velocity [m/s] do you get? In my case the velocity is slightly higher (~8e-11 m/s), whereas the Julia code gives 2e-11 m/s after 3000 iterations for Ra = 1e4. Also, it does not look like the Julia code has reached steady state after 3000 iterations yet (the final time, though, is the same). I have no idea what is happening there. On which grid structure is the density stored?

I do get similar fields to the ones Boris showed for the LaMEM case:

Ra = 1e4: [figures: fields after 3000 iterations, time series, horizontally averaged temperature profile]

Ra = 1e6: [figures: fields after 3000 iterations, time series, horizontally averaged temperature profile] (the high-Ra case is much closer to the benchmark values of higher resolutions; here I use 51 grid points)

In comparison, this is what I get from the recent Julia version (after 3000 iterations each):

Ra = 1e4: [figures: fields, time series of V_rms and Nu, temperature profile] (the profile looks off as well)

Ra = 1e6: [figures: fields, time series of V_rms and Nu, temperature profile] (the dynamics look off too)

albert-de-montserrat commented 8 months ago

There's certainly something off; it looks like there's too much diffusion, and I get the same with WENO. I'm hunting for potential bugs now...

LukasFuchs commented 8 months ago

Ok, so you tried WENO already, good.

Sad to hear that it is not improving. I will then try to look into the upwind option (there might be some numerical diffusion, but it should still work).

LukasFuchs commented 7 months ago

So the dimensional and non-dimensional versions of the Blankenbach benchmark are working with the particle and WENO5 advection schemes (tested for Ra = 1e4 and 1e6). The Nusselt number and V_rms are within the range of the benchmark values, considering the regular grid spacing used here. That is:

| Ra  | Scheme    | Nu              | V_rms            |
|-----|-----------|-----------------|------------------|
| 1e4 | Particles | $\approx$ 5.02  | $\approx$ 44.49  |
| 1e4 | WENO5     | $\approx$ 5.05  | $\approx$ 44.62  |
| 1e6 | Particles | $\approx$ 20.30 | $\approx$ 807.82 |
| 1e6 | WENO5     | $\approx$ 19.86 | $\approx$ 961.07 |

Below are snapshots of each mode after 6000 iterations:

Ra = 1e4: [figures: Particles, WENO5]

Ra = 1e6: [figures: Particles, WENO5]

I consider this benchmark and pull request as finished.

aelligp commented 7 months ago

Thank you very much @LukasFuchs. I activated the CI tests; could you correct the spelling mistakes? After that we can approve and merge.

In the `_typo.toml` file in the root, add `ND = "ND"`.
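
Presumably that whitelists the abbreviation for the typos checker, along the lines of (assuming the standard typos CLI config format):

# _typo.toml (file name as given above)
[default.extend-words]
ND = "ND"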

LukasFuchs commented 7 months ago

Well, I hope simply renaming the files is sufficient!

codecov-commenter commented 7 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:



boriskaus commented 7 months ago

Great that it works! So what was the issue?

LukasFuchs commented 7 months ago

> Looks really good, thanks a lot again! If you could remove these couple of commented lines from the scripts, it looks ready to be merged.

Done!