trixi-framework / Trixi.jl

Trixi.jl: Adaptive high-order numerical simulations of conservation laws in Julia
https://trixi-framework.github.io/Trixi.jl
MIT License

Maxwell's equations example #919

Open · jwscook opened this issue 2 years ago

jwscook commented 2 years ago

Hello,

Trixi is amazing and it's so nice to see it written in Julia. I was wondering if there are plans for a Maxwell's equations example? I ask because I'd be keen to investigate Trixi as a framework for a kinetic plasma physics extension via the particle-in-cell (PIC) method. The plasma would be represented by the particles and the electromagnetic field by Trixi's excellent high-order DG implementation. The particles fly around the domain according to the Lorentz force and deposit their charge and current onto the charge density and current density fields. These act as sources for Maxwell's equations, which are then solved and thereby provide the electromagnetic field for the Lorentz force. At the core of every PIC code is a Maxwell's equations solver. Having one that supports complicated real-world meshes, is high order, and is adaptive is arguably the hardest bit.

Best, James

gregorgassner commented 2 years ago

Dear James,

thanks for your kind words and your interest in Trixi. Here are my thoughts/answers (possibly in random order :-) )

Best, Gregor

ranocha commented 2 years ago
  • If you want to have Maxwell's equations asap, i.e., if you are interested in taking the acoustic solvers and implementing Maxwell's equations yourself, I (and I am quite positive the whole Trixi team) would be happy to help you with this.

Expanding this one a bit: The acoustic perturbation equations implemented in Trixi.jl could be used as a template for a linear hyperbolic system. We also have tutorials describing how to add a new equation system to Trixi.jl (in 1D in this case). If you want to invest some time to implement a "simple" version of Maxwell's equations (without divergence cleaning, maybe TE mode in 2D or something like that), we would be happy to assist you when you prepare a PR. You can also join the Trixi Slack and ask questions there if you like.
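
To make this concrete, here is a minimal sketch of what such an equation type could look like: a 2D TE-mode Maxwell system with state (Ex, Ey, Bz) in normalized units with c = 1. Everything below (the type name MaxwellEquationsTE2D and the chosen variable layout) is illustrative only and not part of Trixi.jl:

using Trixi
using StaticArrays: SVector

# Sketch of a new linear hyperbolic system: 2D TE-mode Maxwell equations
# with state u = (Ex, Ey, Bz) and normalized speed of light c = 1.
struct MaxwellEquationsTE2D <: Trixi.AbstractEquations{2, 3} end

Trixi.varnames(::typeof(Trixi.cons2cons), ::MaxwellEquationsTE2D) = ("Ex", "Ey", "Bz")

# Physical flux in coordinate direction `orientation` (1 = x, 2 = y) for
#   ∂t Ex - ∂y Bz = 0,   ∂t Ey + ∂x Bz = 0,   ∂t Bz + ∂x Ey - ∂y Ex = 0
function Trixi.flux(u, orientation::Integer, ::MaxwellEquationsTE2D)
    Ex, Ey, Bz = u
    if orientation == 1
        return SVector(zero(Ex), Bz, Ey)
    else # orientation == 2
        return SVector(-Bz, zero(Ey), -Ex)
    end
end

# All characteristic speeds are the (normalized) speed of light; these
# estimates are what CFL-based step size control and flux_lax_friedrichs use.
Trixi.max_abs_speeds(u, ::MaxwellEquationsTE2D) = (1.0, 1.0)
Trixi.max_abs_speed_naive(u_ll, u_rr, orientation::Integer, ::MaxwellEquationsTE2D) = 1.0

From there, the workflow of the 1D tutorial (DGSEM solver, SemidiscretizationHyperbolic, an ODE solver from OrdinaryDiffEq) should carry over; boundary conditions and divergence cleaning would be the next pieces to add.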

jwscook commented 2 years ago

Hi Gregor & Hendrik,

https://github.com/piclas-framework looks great. It's a shame that Julia wasn't around back then, but it's really nice-looking Fortran (lots of USE ..., ONLY: statements). I wish I'd known about it a few years ago! It would have made my life a lot easier.

It's great to hear that these things are being thought about. It remains to be seen whether I can set aside enough time to put together a Maxwell solver in Trixi, partly because my higher-ups are keen on speed-ups via distributed parallelism. There is, of course, a chicken-and-egg situation to overcome.

Regarding div-B cleaning, I randomly chose an example to run, examples/unstructured_2d_dgsem/elixir_mhd_alfven_wave.jl, and I see that the div-B error is large. Is this to be expected for this setup? Is there anything that can be done to get rid of it?

Simulation running 'IdealGlmMhdEquations2D' with DGSEM(polydeg=7)
...
 L2 ∇⋅B      :   6.21938282e-01
 L∞ ∇⋅B      :   3.45000565e+00
...
ranocha commented 2 years ago

Regarding div-B cleaning, I randomly chose an example to run, examples/unstructured_2d_dgsem/elixir_mhd_alfven_wave.jl, and I see that the div-B error is large. Is this to be expected for this setup? Is there anything that can be done to get rid of it?

Summoning @andrewwinters5000

andrewwinters5000 commented 2 years ago

Hi James,

These divergence errors are quite large (especially the L∞).

The main issue is that there is a bug in the computation of div(B) on the unstructured mesh type (now fixed in #925). With this bug fix I ran examples/unstructured_2d_dgsem/elixir_mhd_alfven_wave.jl again and get

Simulation running 'IdealGlmMhdEquations2D' with DGSEM(polydeg=7)
...
 L2 ∇⋅B      :   1.42117633e-03
 L∞ ∇⋅B      :   1.43386166e-02
...

Another issue regarding the div(B) errors for the Alfvén wave test case is that it has periodic boundary conditions. The divergence cleaning strategy turns the div(B) errors into propagating wave quantities such that they advect away from where they are generated. However, on periodic domains these div(B) errors just zip around in the domain (and do not dissipate much because the high-order DG method has very low numerical dissipation).
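
For reference, the structure behind this (in the notation of Dedner et al.; exact scalings of the cleaning speed differ between formulations) is that the induction equation is augmented with the gradient of the cleaning field ψ, and ψ is driven by the divergence of B:

$$ \partial_t \mathbf{B} + \nabla\cdot\left(\mathbf{v}\mathbf{B}^T - \mathbf{B}\mathbf{v}^T\right) + \nabla\psi = 0, \qquad \partial_t \psi + c_h^2\,\nabla\cdot\mathbf{B} = 0, $$

so that, up to the advective terms, ψ and ∇⋅B obey a wave equation with speed c_h: divergence errors are transported through the domain instead of growing in place, which is also why they never leave a periodic box.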

jwscook commented 2 years ago

That's much better! I've only dealt with Cartesian finite difference grids with divergencelessness baked into the numerical scheme (i.e. like constrained transport) or with codes that use the vector potential. Are these levels of div(B) error normal for unstructured (curved) meshes?

Also, how difficult would it be to add a diffusion term in order to implement the mixed Dedner scheme (here is a publicly available paper describing it) to damp out the div(B) errors?

andrewwinters5000 commented 2 years ago

That's much better! I've only dealt with Cartesian finite difference grids with divergencelessness baked into the numerical scheme (i.e. like constrained transport) or with codes that use the vector potential. Are these levels of div(B) error normal for unstructured (curved) meshes?

The size of the div(B) errors is related to the overall resolution and approximation order (i.e. the number of elements and the polynomial order used in the particular DG run). The errors in the div(B) condition also converge as the resolution of the scheme is increased, although the div(B) errors are typically larger than the errors in the conserved quantities. For example, running the example with the default elixir values

trixi_include("examples/unstructured_2d_dgsem/elixir_mhd_alfven_wave.jl")
...
 Variable:       rho              rho_v1           rho_v2           rho_v3           rho_e            B1               B2               B3               psi           
 L2 error:       2.20161951e-04   1.25062833e-05   1.25062833e-05   1.98343404e-05   1.85778820e-05   1.84421090e-05   1.84421090e-05   6.70864231e-06   2.00318048e-05
 Linf error:     1.83506177e-03   1.15301614e-04   1.15301614e-04   1.67579651e-04   9.32942849e-05   1.10622685e-04   1.10622685e-04   6.70673987e-05   2.19091532e-04
...
 L2 ∇⋅B      :   1.42117633e-03
 L∞ ∇⋅B      :   1.43386166e-02

We can then increase the polynomial order (that is, the overall resolution power of the scheme) to get

trixi_include("examples/unstructured_2d_dgsem/elixir_mhd_alfven_wave.jl", polydeg=12, cfl=0.5)
...
 Variable:       rho              rho_v1           rho_v2           rho_v3           rho_e            B1               B2               B3               psi           
 L2 error:       1.86332565e-07   9.57695234e-09   9.57695231e-09   1.34005107e-08   6.28031891e-09   6.31246959e-09   6.31246946e-09   3.04558838e-09   1.19412967e-08
 Linf error:     3.35550507e-06   2.09262736e-07   2.09262735e-07   3.26851372e-07   4.42675929e-08   6.76423149e-08   6.76423157e-08   3.28360247e-08   1.36151629e-07
...
 L2 ∇⋅B      :   1.14646287e-06
 L∞ ∇⋅B      :   2.51589258e-05

Also, how difficult would it be to add a diffusion term in order to implement the mixed Dedner scheme (here is a publicly available paper describing it) to damp out the div(B) errors?

It is quite easy to add this damping feature (or at least some version of it) because the damping acts as a source term in the approximation. For instance, the PICLas code can switch this damping ON/OFF using the input parameter DoParabolicDamping. The source term for the damping is then added in the CalcSource routine for Maxwell's equations.
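
For illustration, here is a minimal sketch of what such a damping term could look like as a Trixi.jl source term, written for the existing 9-variable IdealGlmMhdEquations2D state (rho, rho_v1, rho_v2, rho_v3, rho_e, B1, B2, B3, psi). The function name and the numerical values of the cleaning speeds c_h and c_p below are assumptions for this sketch, not Trixi.jl API:

using Trixi
using StaticArrays: SVector

# Mixed GLM damping (Dedner et al.): add -(c_h^2 / c_p^2) * psi to the psi equation.
function source_terms_glm_damping(u, x, t, equations::IdealGlmMhdEquations2D)
    c_h = 1.0   # hyperbolic cleaning speed (assumed constant for this sketch)
    c_p = 0.5   # parabolic cleaning speed (assumed constant for this sketch)
    psi = u[9]
    z = zero(psi)
    # only the divergence-cleaning variable psi is damped
    return SVector(z, z, z, z, z, z, z, z, -(c_h^2 / c_p^2) * psi)
end

This could then be passed to the semidiscretization via the source_terms keyword of SemidiscretizationHyperbolic.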

jwscook commented 2 years ago

Hi All. I've typed out the fluxes in a gist, but I'm not having any luck. Any pointers?

ranocha commented 2 years ago

Are you sure your upwind flux does what it should? Copying the code from your gist to the REPL and

julia> u = @SVector rand(6)
6-element SVector{6, Float64} with indices SOneTo(6):
 0.6451198972871327
 0.14894160035733395
 0.5524743674532415
 0.07808219252318249
 0.7026919323982985
 0.4139358361082942

julia> eq = MaxwellEquations2D()
┌──────────────────────────────────────────────────────────────────────────────────────────────────┐
│ MaxwellEquations2D                                                                               │
│ ══════════════════                                                                               │
│ #variables: ………………………………………………… 6                                                                │
│ │ variable 1: …………………………………………… Ex                                                               │
│ │ variable 2: …………………………………………… Ey                                                               │
│ │ variable 3: …………………………………………… Ez                                                               │
│ │ variable 4: …………………………………………… Bx                                                               │
│ │ variable 5: …………………………………………… By                                                               │
│ │ variable 6: …………………………………………… Bz                                                               │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

julia> flux_upwind(u, u, 1, eq) # should be `flux(u, 1, eq)` for a consistent numerical flux
6-element SVector{6, Float64} with indices SOneTo(6):
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0

julia> flux(u, 1, eq)
6-element SVector{6, Float64} with indices SOneTo(6):
 -0.0
  0.4139358361082942
 -0.7026919323982985
  0.0
 -0.5524743674532415
  0.14894160035733395

flux_upwind doesn't seem to work.
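
If it helps as a starting point, a local Lax-Friedrichs (Rusanov) type flux built from your physical flux is consistent by construction and is a reasonable stand-in until an exact upwind flux is in place. A minimal sketch, assuming normalized units with c = 1 and reusing the flux from your gist (the function name is just a placeholder):

function flux_rusanov(u_ll, u_rr, orientation::Integer, eq::MaxwellEquations2D)
    f_ll = flux(u_ll, orientation, eq)
    f_rr = flux(u_rr, orientation, eq)
    c = 1.0  # maximum signal speed: the normalized speed of light
    return 0.5 * (f_ll + f_rr) - 0.5 * c * (u_rr - u_ll)
end

In particular, flux_rusanov(u, u, 1, eq) then reduces to flux(u, 1, eq), which is exactly the consistency check that fails above.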

CC @andrewwinters5000

ranocha commented 2 years ago

By the way, there are some parts that could be improved once the code is working (i.e., after the "make it work" phase, during the "make it fast" phase). For example, the output of the summary_callback tells you that there are a lot of allocations

┌ Warning: Instability detected. Aborting
└ @ SciMLBase ~/.julia/packages/SciMLBase/mndcy/src/integrator_interface.jl:351
 ──────────────────────────────────────────────────────────────────────────────────
              Trixi.jl                     Time                   Allocations      
                                   ──────────────────────   ───────────────────────
         Tot / % measured:              6.99s / 98.8%           10.2GiB / 100%     

 Section                   ncalls     time   %tot     avg     alloc   %tot      avg
 ──────────────────────────────────────────────────────────────────────────────────
 rhs!                       1.94k    6.90s   100%  3.56ms   10.2GiB  100%   5.38MiB
   interface flux           1.94k    3.47s  50.2%  1.79ms   5.44GiB  53.5%  2.88MiB
   volume integral          1.94k    3.28s  47.6%  1.69ms   4.73GiB  46.5%  2.50MiB
   surface integral         1.94k   65.4ms  0.95%  33.7μs     0.00B  0.00%    0.00B
   prolong2interfaces       1.94k   55.3ms  0.80%  28.5μs     0.00B  0.00%    0.00B
   reset ∂u/∂t              1.94k   15.6ms  0.23%  8.03μs     0.00B  0.00%    0.00B
   Jacobian                 1.94k   7.60ms  0.11%  3.92μs     0.00B  0.00%    0.00B
   ~rhs!~                   1.94k   5.19ms  0.08%  2.67μs    706KiB  0.01%     373B
   prolong2boundaries       1.94k    196μs  0.00%   101ns     0.00B  0.00%    0.00B
   prolong2mortars          1.94k    154μs  0.00%  79.6ns     0.00B  0.00%    0.00B
   mortar flux              1.94k   66.0μs  0.00%  34.0ns     0.00B  0.00%    0.00B
   source terms             1.94k   52.2μs  0.00%  26.9ns     0.00B  0.00%    0.00B
   boundary flux            1.94k   51.4μs  0.00%  26.5ns     0.00B  0.00%    0.00B
 I/O                            2   1.49ms  0.02%   745μs    610KiB  0.01%   305KiB
   save solution                1    872μs  0.01%   872μs    586KiB  0.01%   586KiB
   ~I/O~                        2    617μs  0.01%   309μs   24.5KiB  0.00%  12.3KiB
   get element variables        1    453ns  0.00%   453ns     96.0B  0.00%    96.0B
   save mesh                    1   29.0ns  0.00%  29.0ns     0.00B  0.00%    0.00B
 ──────────────────────────────────────────────────────────────────────────────────

Basically, Trixi should be nearly allocation-free. These allocations come from creating full Vectors as intermediate values, e.g., in

  return T([FE[1], FE[2], FE[3], FH[1], FH[2], FH[3]])#, 0, 0, 0, 0, 0, 0])

You should use SVectors there, i.e.,

  return SVector(FE[1], FE[2], FE[3], FH[1], FH[2], FH[3])

Similarly, I would just use

electricfield(u, eq::MaxwellEquations2D) = SVector(u[1], u[2], u[3])
magneticfield(u, eq::MaxwellEquations2D) = SVector(u[4], u[5], u[6])

instead of

electricfield(u, eq::MaxwellEquations2D) = (@view u[1:3])
magneticfield(u, eq::MaxwellEquations2D) = (@view u[4:6])

since you can't modify the underlying u::SVector anyway.
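
A quick way to confirm a kernel is allocation-free after switching to SVectors is to benchmark it; a minimal sketch, assuming u and eq are set up as in the REPL session above (BenchmarkTools is not a Trixi.jl dependency, just a convenient tool here):

using BenchmarkTools

# Reports the minimum number of bytes allocated across many runs, so
# compilation overhead does not distort the result; the target is 0.
@ballocated flux($u, 1, $eq)

The same check applies to the numerical fluxes and the other small kernels called inside rhs!.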

alok commented 2 years ago

I'm interested in adding support for the 3D Maxwell's equations, at least in (what I hope is) a simple case. My use case is inverse design, where this chain is optimized to match a target emissivity: control points -> mesh -> EM field -> emissivity. The real-world scenario it's meant to correspond to is TPV design, so the EM field would be coming from the sun.

@gregorgassner thoughts?

gregorgassner commented 2 years ago

@alok your use case is interesting and challenging. Can you add a bit more information on what type of domain you are interested in: is it a complicated domain shape with complicated boundary conditions? This question is important, as it gives us an idea of which type of mesh you need: unstructured or structured, tetrahedra, hexahedra, etc.

my first thoughts are:

1.) As mentioned above, I think it is relatively straightforward to bring Maxwell's equations into Trixi.jl with a DG discretization. I would in particular recommend going for the extended Maxwell system, where a divergence cleaning mechanism is added to Maxwell's equations (see the sketch after this list). This gives a rather straightforward linear hyperbolic PDE to implement.

2.) The application scenario you describe is of course more than just the solver in 3D. If I understand correctly, you need a process chain from control points to the mesh - this is the tough part that needs more discussion and more information (hence my question above).
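
To make the first point a bit more concrete: one common form of such an extended ("purely hyperbolic") Maxwell system, as used for instance in PICLas, augments the fields with two scalar cleaning variables φ and ψ. Roughly (the exact scalings of the cleaning parameters χ and γ vary between references and should be checked against the literature):

$$ \partial_t \mathbf{E} - c^2\,\nabla\times\mathbf{B} + \chi c^2\,\nabla\phi = -\mathbf{j}/\varepsilon_0, \qquad \partial_t \mathbf{B} + \nabla\times\mathbf{E} + \gamma\,\nabla\psi = 0, $$
$$ \partial_t \phi + \chi\,\nabla\cdot\mathbf{E} = \chi\,\rho/\varepsilon_0, \qquad \partial_t \psi + \gamma c^2\,\nabla\cdot\mathbf{B} = 0. $$

This is a linear, constant-coefficient hyperbolic system, so it fits the same mold as the acoustic perturbation equations already in Trixi.jl.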

alok commented 2 years ago

The domain is a compact, connected 3D region. Maybe not simply connected? Since the idea is to deform the geometry to conform to some cost function, it can get complicated.

For boundary conditions, I don't know; my knowledge of differential equations is currently weak. The idea is to model a solar cell that's suddenly exposed to the sun, and then model its EM field.

The control points would most likely be vertices on the mesh, similar to joints in a wireframe. I'm thinking of the mesh as a function of its vertices, so optimizing with respect to vertex positions would deform the mesh.

gregorgassner commented 2 years ago

Hm, I am no expert in the application you want to do, so it is a bit hard for me to give 100% accurate answers. My gut feeling is that it is quite challenging because of the re-meshing, having arbitrary vertices that should define a mesh, deforming meshes, etc. To me, this sounds in principle doable with Trixi.jl, but I would estimate that this is a project with a 1-2 year scope, as Trixi.jl is not particularly well prepared for such a task at the moment. Doable, but you have to invest serious time and effort into it.