PrincetonUniversity / athena

Athena++ radiation GRMHD code and adaptive mesh refinement (AMR) framework
https://www.athena-astro.app
BSD 3-Clause "New" or "Revised" License

Passive scalar issue with AMR #365

Closed: morganemacleod closed this issue 2 years ago

morganemacleod commented 4 years ago

Hi Folks,

I have an issue that I first noticed in my own problem and have since been able to reproduce (though to a somewhat less extreme extent) in the hydro blast problem.

It stems from using adaptive refinement with passive scalars. When meshblocks are de-refined, their boundaries seem to introduce incorrect values of the scalar concentration.

To reproduce, I modified the blast problem by introducing a single passive scalar. The changes to blast.cpp are marked with "MM" in the attached file: blast_scalar.cpp.txt (rename to .cpp)

I configure with: python configure.py --prob blast_scalar --coord spherical_polar --flux hllc -mpi -hdf5 --hdf5_path /opt/local --cxx g++ --nscalars=1

And I run with the following input file, which increases the number of AMR levels and changes the refinement threshold relative to the default: athinput.blast_sph.txt (again, remove the .txt)

I read the data into a Python notebook as follows:

import matplotlib.pyplot as plt
import numpy as np
import athena_read as ar

%matplotlib inline

d = ar.athdf("/Users/morganmacleod/DATA/athenaruns/blast/scalar/Blast.out1.00001.athdf")

plt.figure(figsize=(8,8))
th_ind = int(d['rho'].shape[1]/2)              # middle theta index (arrays are indexed [k,j,i])
rr,pp = np.meshgrid(d['x1v'],d['x3v'])         # r and phi cell-center coordinates
plt.pcolormesh(rr*np.cos(pp),rr*np.sin(pp),d['r0'][:,th_ind,:])   # passive scalar slice at th_ind
plt.colorbar(label='r')
plt.xlabel('x')
plt.ylabel('y')
plt.axis('equal')

To generate the following:

download-6

What I see here is that in the interior of the "bubble", where the mesh has de-refined, some artifacts are introduced along the boundaries.

When I ran this with my own problem (which unfortunately requires other customizations of athena) I was encountering values of r -> 1e20! I haven't been able to reproduce that extreme behavior in the blast problem, but I thought this case, based on the unmodified public repo, was probably more useful.

With a fixed mesh, for example, we see no artifacts in the scalar concentration:

download-7

Thanks for the help & insight.

Morgan

felker commented 4 years ago

Sounds like a bug in the derefinement operator applied to the passive scalars. To confirm it is not a more general problem, can you try running with a very high derefine_count to effectively disable derefinement?

c-white commented 4 years ago

There may well be an issue, but can we confirm this is not a plotting artifact relating to mesh refinement (static or adaptive)?

For quantities that vary in the direction perpendicular to the plane of the slice, one generally sees weird things in plots. The problem is that the data reader casts everything onto the finest grid, so the cell that gets plotted lies at a distance from the plotting slice that varies with refinement level. The highest refinement levels here are plotting the cells right next to the midplane, while the lower refinement levels are essentially plotting cell quantities that are volume-averaged over a volume extending further from the plotting plane.

One way to test this is to cast the data to the lowest refinement level in the read step (level=0).
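
As a concrete version of that test (a sketch assuming the same athena_read module as in the first comment; if I remember the reader's signature correctly, athdf() accepts a level argument for this):

import matplotlib.pyplot as plt
import numpy as np
import athena_read as ar

# Restrict all data to the root grid so every plotted cell sits at the same
# distance from the slicing plane; purely plotting-related artifacts should vanish.
d0 = ar.athdf("/Users/morganmacleod/DATA/athenaruns/blast/scalar/Blast.out1.00001.athdf",
              level=0)
th_ind = d0['rho'].shape[1] // 2
rr, pp = np.meshgrid(d0['x1v'], d0['x3v'])
plt.pcolormesh(rr*np.cos(pp), rr*np.sin(pp), d0['r0'][:, th_ind, :])
plt.colorbar(label='r')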

morganemacleod commented 4 years ago

@felker, @c-white Thanks for both of these thoughts. I need to do a little more experimenting to confirm everything. More soon.

morganemacleod commented 4 years ago

A little bit more progress to report:

Because @c-white's point about plotting is well taken, I tried two things (in addition to the test that Kyle mentioned): 1) Cartesian coordinates, and 2) printing values directly (skipping the plotting step).

I was looking at the maximum value of the scalar concentration at t=0.1 in the blast problem (initializing a scalar r=1 within the region of amplified pressure)

Spherical polar coordinates: fixed mesh r_max = 1.0; SMR = 1.0; AMR (refine only) = 1.085; AMR = 1.16

Cartesian coordinates: fixed mesh r_max = 1.0; SMR = 1.0; AMR (refine only) = 1.0013; AMR = 1.0046
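
For reference, these maxima come from printing the array values directly rather than reading them off a plot; something like the following (same athena_read reader as above, with an illustrative file name) is enough:

import athena_read as ar

d = ar.athdf("Blast.out1.00001.athdf")   # whichever dump corresponds to t = 0.1
print(d['r0'].max())                     # maximum passive scalar concentration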

To me, this suggests that the artifacts I am seeing have to do with AMR in spherical polar coordinates, but not exclusively with derefinement, as I originally speculated above.

For reference, here's an SMR example equivalent to the earlier spherical polar images:

download-8

and here is AMR, but with derefine_count set very large:

download-9

Would definitely appreciate advice on where to go from here.

Thank you! Morgan

felker commented 4 years ago

Now that I think about it, I cannot recall if AMR+curvilinear coordinates with passive scalars has actually ever been tested.

There might be a bad assumption in the coupled prolongation/restriction of the mass fluxes and scalar concentration in the presence of curved geometries...
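
As a sanity check on that suspicion: if the restriction were a volume-weighted average applied consistently to both the conserved scalar s and the density d, the coarse concentration r = s/d would be a weighted average of the fine-cell concentrations and could never exceed their maximum. A toy numerical check of that statement (plain Python, not the code's actual restriction operator):

import random

random.seed(1)
for _ in range(10000):
    # 8 fine cells restricted to 1 coarse cell (3D); unequal volumes mimic curvilinear cells
    vol = [random.uniform(0.1, 10.0) for _ in range(8)]
    den = [random.uniform(0.1, 10.0) for _ in range(8)]
    r   = [random.uniform(0.0, 1.0) for _ in range(8)]    # fine-cell concentrations
    s   = [d*ri for d, ri in zip(den, r)]                  # conserved scalar s = r*d
    r_coarse = (sum(v*si for v, si in zip(vol, s)) /
                sum(v*di for v, di in zip(vol, den)))
    assert min(r) - 1e-12 <= r_coarse <= max(r) + 1e-12    # bounded by fine-cell extrema
print("consistent volume-weighted restriction keeps r within [min(r_i), max(r_i)]")

So if the weights (e.g. the curved-geometry cell volumes) used for the scalar ever differ from those used for the fluid mass, small overshoots like the ones reported above are plausible.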

morganemacleod commented 4 years ago

Edit! I just tried Cartesian again, and when I make the meshblocks smaller (4x4x4), identical to what I was doing in spherical polar, I get scalar concentrations of ~1.07.

In other words, I'm also getting similar behavior in Cartesian coordinates as in spherical polar. Here's an image:

download-10

One difference is that this imposes more creation/destruction of meshblocks over the course of the simulation.

Sorry for the false lead above.

Morgan

felker commented 4 years ago

Does it occur with larger meshblocks?

morganemacleod commented 4 years ago

Yes, the occurrence of scalar concentrations well above 1.0 (i.e. not just in the 10th decimal place) happens with different meshblock sizes. With larger meshblocks the accumulated maximum departure from 1.0 is smaller; with smaller meshblocks it is larger. Regarding my comments above, I had originally tested Cartesian and spherical polar with different meshblock sizes. It seems that the artifacts have to do with meshblocks being refined/derefined, because they don't arise in SMR, and they are exacerbated by smaller meshblocks, which lead to more create/destroy events over the course of the simulation.

felker commented 4 years ago

@morganemacleod which reconstruction method and time integrator were you using for this problem?

felker commented 4 years ago

@dfielding14 can you add the details of your similar problem here?

morganemacleod commented 4 years ago

Hi @felker, from the input file in the first comment it looks like this was with VL2 for time integration and xorder=2 for reconstruction. I am happy to revisit this if there's any more information it would be helpful to provide. Thank you for checking it out!

dfielding14 commented 4 years ago

@felker @morganemacleod Around the time Morgan first raised this issue I was having the same scalar artifacts at grid boundaries when using AMR. I looked at it again recently and found that it was mysteriously no longer an issue. I didn't question why and assumed that some new PR had fixed the problem. Then last week it started appearing again with the same version and an otherwise identical pgen. I realized that the only thing that had changed was that I went from VL2 to RK2, so I switched back to VL2 and the problem went away. Kyle and I suspected that maybe you were using RK2, but I guess not. One other empirical finding, which may or may not be relevant, is that I did not see the artifacts when using 4 or fewer levels, but when I used 5 or more they appeared.

felker commented 4 years ago

Ok, I ran @morganemacleod's problem (Cartesian version; problem generator needed a fix in order to compile with the latest development version). Fortunately the violation occurs within the first few timesteps:

➜  bin git:(master) ✗ mpirun -np 4 ./athena -i athinput.blast_sph
### Warning in Mesh::Initialize
The number of MeshBlocks increased more than twice during initialization.
More computing power than you expected may be required.
### Warning in Mesh::Initialize
The number of MeshBlocks increased more than twice during initialization.
More computing power than you expected may be required.
### Warning in Mesh::Initialize
The number of MeshBlocks increased more than twice during initialization.
More computing power than you expected may be required.
### Warning in Mesh::Initialize
The number of MeshBlocks increased more than twice during initialization.
More computing power than you expected may be required.

Setup complete, entering main loop...

cycle=0 time=0.0000000000000000e+00 dt=4.5927932677179836e-04
cycle=1 time=4.5927932677179836e-04 dt=4.3043818542762763e-04
cycle=2 time=8.8971751219942604e-04 dt=4.0912919914355250e-04
Conserved scalar exceeds 1.0; s(0,3,3,4) = 1.0000000002925664e+00
Conserved scalar exceeds 1.0; s(0,3,5,3) = 1.0000000005112657e+00
Conserved scalar exceeds 1.0; s(0,5,5,3) = 1.0000000009720036e+00
Conserved scalar exceeds 1.0; s(0,5,3,5) = 1.0000000025912912e+00

felker commented 4 years ago

I have now set derefine_count = 5000000. If numlevel=2, no violation occurs (at least in the main flux divergence update). If numlevel=3:

Setup complete, entering main loop...

cycle=0 time=0.0000000000000000e+00 dt=9.1855865354359672e-04
cycle=1 time=9.1855865354359672e-04 dt=8.6049816189081826e-04
cycle=2 time=1.7790568154344151e-03 dt=8.1779806841855989e-04
Conserved scalar exceeds 1.0; s(0,2,2,3) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; s(0,5,2,3) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; s(0,2,3,4) = 1.0000000008763485e+00

felker commented 4 years ago

Getting closer. I stopped using MPI just to make things easier, and I put traps specific to this problem around the passive scalar variable inversion. The discrepancy starts in the first timestep (specifically during the VL2 corrector stage, but not for RK1, e.g.) for a subset of the 416 MeshBlocks. Specifically, it is detected in the time_integrator.cpp CONS2PRIM task's call of PassiveScalarConservedToPrimitive(), as opposed to the calls in the earlier PROLONG task:

In time_integrator.cpp, about to call PassiveScalarConservedToPrimitive
In W(U): gid, lid = 318,318
Primitive scalar exceeds 1.0; r(0,2,0,5) = 1.0000000000000002e+00
s_n, d, di = 9.1678411858229703e-01, 9.1678411858229691e-01, 1.0907693313300268e+00
In W(U): gid, lid = 318,318
Primitive scalar exceeds 1.0; r(0,3,0,0) = 1.0000000000000002e+00
s_n, d, di = 9.5653528244209285e-01, 9.5653528244209274e-01, 1.0454397431602724e+00
In W(U): gid, lid = 318,318
Primitive scalar exceeds 1.0; r(0,3,0,3) = 1.0000000000000002e+00
s_n, d, di = 9.5653528244209285e-01, 9.5653528244209274e-01, 1.0454397431602724e+00
In W(U): gid, lid = 318,318
Primitive scalar exceeds 1.0; r(0,3,4,4) = 1.0000000000000002e+00
s_n, d, di = 9.5653528244209285e-01, 9.5653528244209274e-01, 1.0454397431602724e+00
In W(U): gid, lid = 318,318
Primitive scalar exceeds 1.0; r(0,3,7,4) = 1.0000000000000002e+00
s_n, d, di = 9.5653528244209285e-01, 9.5653528244209274e-01, 1.0454397431602724e+00
In time_integrator.cpp, after calling PassiveScalarConservedToPrimitive
In time_integrator.cpp, about to call PassiveScalarConservedToPrimitive
In W(U): gid, lid = 319,319
Primitive scalar exceeds 1.0; r(0,2,0,1) = 1.0000000000000002e+00
s_n, d, di = 9.1678411858229703e-01, 9.1678411858229691e-01, 1.0907693313300268e+00
In W(U): gid, lid = 319,319
Primitive scalar exceeds 1.0; r(0,3,4,0) = 1.0000000000000002e+00
s_n, d, di = 9.5653528244209285e-01, 9.5653528244209274e-01, 1.0454397431602724e+00
In W(U): gid, lid = 319,319
Primitive scalar exceeds 1.0; r(0,3,7,0) = 1.0000000000000002e+00
s_n, d, di = 9.5653528244209285e-01, 9.5653528244209274e-01, 1.0454397431602724e+00
In time_integrator.cpp, after calling PassiveScalarConservedToPrimitive

morganemacleod commented 4 years ago

Awesome, thank you @felker for your digging on this!

felker commented 4 years ago

Ok, I think I am going to have to give up on this soon. Spent a few hours yesterday chasing down a red herring that caused the above deviation from r=1.0 by 1 unit in the last place (ULP). That can be avoided if you change https://github.com/PrincetonUniversity/athena-public-version/blob/90a6eb956183b257e33d149463996fe569c8ae3d/src/eos/eos_scalars.cpp#L49

to r_n = s_n/w_d. We should probably make this change, since the additional rounding operation from storing di=1.0/w_d can prevent the ratio of conserved scalar to conserved fluid density from being exact.
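
For what it's worth, a quick standalone illustration of that extra rounding step (plain Python doubles, not Athena++ code): multiplying by a stored reciprocal is not always bitwise-identical to dividing directly.

import random

random.seed(0)
mismatches = 0
trials = 100000
for _ in range(trials):
    d = random.uniform(0.5, 2.0)       # stand-in for the conserved fluid density w_d
    s = random.uniform(0.0, 1.0) * d   # conserved scalar with s <= d
    di = 1.0 / d                       # precomputed reciprocal, as currently stored
    if s * di != s / d:                # the two routes can round to different doubles
        mismatches += 1
print(f"{mismatches} of {trials} samples give s*(1/d) != s/d")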

But, that does not fix the larger violations of the passive scalar maximum with AMR. Those are showing up between cycle=2 and cycle=3 with the provided input file, changing numlevel=3 and compiling for Cartesian coordinates, specifically:

make clean; python configure.py --prob blast_scalar --flux hllc -hdf5 --cxx g++ --nscalars=1 --hdf5_path=/Users/felker/hdf5-serial/ -h5double; make -j

The violation is relatively large:

MeshBlock: gid = 87, LogicalLocation.level = 5
Conserved scalar exceeds 1.0 in add_scalar_flux_divergence.cpp; s(0,5,2,4) = 1.0000000008766579e+00

This problem might also happen with SMR, since I disabled derefinement and commented out the following line in main.cpp:

pmesh->LoadBalancingAndAdaptiveMeshRefinement(pinput);

So all the levels and new MeshBlocks are spawned during Mesh initialization. Although that doesn't rule out the possibility that the initial prolongation introduces artifacts that cause the problem in later cycles?

I could give debugging this another try if we had an even simpler reproducer, ideally a 2D problem with only numlevel=2 where a violation can be detected on the first cycle. Thoughts @tomidakn, @morganemacleod, @jmstone, @c-white ?

Also, another hint is that the violations seem to occur in or near the lower ghost cells in x1, x2, x3. Since the MeshBlocks are 4^3 and NGHOST=2, the ghost cells have at least one of the indices i, j, k equal to 0, 1, 6, or 7. Here are all the violations detected between cycle=2 and 3 in the above problem:

Conserved scalar exceeds 1.0; MeshBlock GID=87  level=5  s(0,5,2,4) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=101  level=5  s(0,5,5,4) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=113  level=5  s(0,2,3,4) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=113  level=5  s(0,5,5,3) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=117  level=5  s(0,2,4,3) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=117  level=5  s(0,3,3,3) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=131  level=5  s(0,2,4,4) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=131  level=5  s(0,5,2,3) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=135  level=5  s(0,2,3,3) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=135  level=5  s(0,3,4,3) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=153  level=5  s(0,4,3,3) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=153  level=5  s(0,5,4,3) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=157  level=5  s(0,2,5,3) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=157  level=5  s(0,5,3,4) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=171  level=5  s(0,4,4,3) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=171  level=5  s(0,5,3,3) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=175  level=5  s(0,2,2,3) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=175  level=5  s(0,5,4,4) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=187  level=5  s(0,2,2,4) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=201  level=5  s(0,2,5,4) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=214  level=5  s(0,5,2,3) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=228  level=5  s(0,5,5,3) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=240  level=5  s(0,2,3,3) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=240  level=5  s(0,5,5,4) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=244  level=5  s(0,2,4,4) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=244  level=5  s(0,3,3,4) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=258  level=5  s(0,2,4,3) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=258  level=5  s(0,5,2,4) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=262  level=5  s(0,2,3,4) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=262  level=5  s(0,3,4,4) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=280  level=5  s(0,4,3,4) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=280  level=5  s(0,5,4,4) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=284  level=5  s(0,2,5,4) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=284  level=5  s(0,5,3,3) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=298  level=5  s(0,4,4,4) = 1.0000000009698100e+00
Conserved scalar exceeds 1.0; MeshBlock GID=298  level=5  s(0,5,3,4) = 1.0000000009730714e+00
Conserved scalar exceeds 1.0; MeshBlock GID=302  level=5  s(0,2,2,4) = 1.0000000009361785e+00
Conserved scalar exceeds 1.0; MeshBlock GID=302  level=5  s(0,5,4,3) = 1.0000000008763485e+00
Conserved scalar exceeds 1.0; MeshBlock GID=314  level=5  s(0,2,2,3) = 1.0000000008766579e+00
Conserved scalar exceeds 1.0; MeshBlock GID=328  level=5  s(0,2,5,3) = 1.0000000008766579e+00

felker commented 4 years ago

athinput.blast_sph_2d.txt

Here is a 2D input file, which generates the following violations between cycle=3 and 4:

Conserved scalar exceeds 1.0; MeshBlock GID=17  level=5  s(0,0,3,5) = 1.0000000000000162e+00
Conserved scalar exceeds 1.0; MeshBlock GID=40  level=5  s(0,0,4,5) = 1.0000000000000162e+00
Conserved scalar exceeds 1.0; MeshBlock GID=47  level=5  s(0,0,3,2) = 1.0000000000000162e+00
Conserved scalar exceeds 1.0; MeshBlock GID=70  level=5  s(0,0,4,2) = 1.0000000000000162e+00

jweatson commented 3 years ago

Edit: I forgot to mention the cooling feature and that it should be turned off for now due to a missing file; I have changed the input file to match this!

Hi, I believe my issue (#390) has been merged with this one; I was asked to add my problem file and compile commands:

athinput.wr140.txt

I would suggest turning off cooling for now, since it doesn't do anything but asks for a cooling curve and will exit if one is not available. Set cooling and dust_cooling to false in the input file!

The video in the issue page was from a previous version of my code, but the current version is still exhibiting the same issue. This is with MPI enabled, so the configuration is:

./configure.py --prob cwb --nscal 4 -mpi -hdf5

For GCC. Run command is:

mpirun bin/athena -i athinput.wr140.txt

The issue only occurs with AMR. I have managed to get extremely good results with SMR, but as this work covers elliptical orbit systems, AMR is needed to keep the processing time down.

jweatson commented 3 years ago

Hi, sorry to be a bother but I was just wondering what the progress was on the fix?

c-white commented 2 years ago

This is not a solution, but as I've been deep-diving into passive scalars lately, I thought I'd mention that I see similar artifacts at SMR refinement boundaries (see below).

They only appear in the background atmosphere, where floors are activating all the time. I noticed that my problems diminish if I add the conserved passive scalar array as input to ConservedToPrimitive, where it is modified if and when variable inversion decides to modify the conserved density. You can imagine catastrophic failures where u(IDN) is reset to a floor for some reason, but s is left unchanged and large, so some future call to PassiveScalarConservedToPrimitive will set r = s / u(IDN) to a very large value.

This is a nontrivial change (there are many variable inversions throughout the code), and it doesn't completely fix things. It does make me wonder if there is an issue with hydro floors in prolongation buffers having unintended consequences for scalars.
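
A toy illustration of that failure mode (standalone Python, not Athena++ code): the inversion resets the conserved density to the floor, the conserved scalar keeps the value it had when the density was large, and the next conserved-to-primitive call divides an inconsistent pair.

density_floor = 1.0e-10

d_old = 1.0e-2                        # density in an atmosphere cell before a bad update
s = 1.0 * d_old                       # conserved scalar consistent with r = 1

d_new = max(-3.0e-5, density_floor)   # update produced a negative density; the floor is applied
r_bad = s / d_new                     # s was never adjusted, so r = 1e8

# Keeping s consistent whenever d is modified (for example, re-imposing r <= 1 after
# the floor, or rescaling s by the same factor applied to d) keeps r bounded:
s_limited = min(s, 1.0 * d_new)
r_fixed = s_limited / d_new           # back to 1.0

print(f"r without adjusting s: {r_bad:.1e}; with s limited: {r_fixed:.1f}")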

scalar

c-white commented 2 years ago

Does #434 help this issue at all?

jweatson commented 2 years ago

Hi Chris, I have not checked; I've been extremely busy finishing my thesis. I managed to get it working using SMR and have gotten good results from it, though I would love to use AMR in the future, so I will look at this more closely after I have submitted my thesis. Thank you very much! Joseph


felker commented 2 years ago

I would love to personally put a monetary bounty on this issue, but I am not sure that there is a standard way to do this for non-security-related bugs?

And having left off debugging this in 2020 with https://github.com/PrincetonUniversity/athena/issues/365#issuecomment-729114935 I know how painful it might be and probably cannot offer enough of a reward to make it a decent hourly ROI 😄

c-white commented 2 years ago

Possibly related to #446.

felker commented 2 years ago

@atomsite @morganemacleod can you retest with the latest changes from #446 ?

felker commented 2 years ago

I just confirmed that @morganemacleod's originally reported problem is fixed as of e8f1ee5a5813b7edaf1ca42b69bf4108172d01f2, whereas afcd297a9b2b8e91ae06c0637911d49f8eba081d from two weeks ago still had the artifacts in the first plot.

make clean; python configure.py --prob blast_scalar --flux hllc -hdf5 --cxx g++ --nscalars=1 --hdf5_path=/usr/local -mpi -h5double --coord spherical_polar; make -j8
cd bin
mpirun -np 4 ./athena -i athinput.blast_sph

(FYI at first I had forgotten to include --coord spherical_polar, which was necessary to reproduce the problem on the older versions of the code, of course).

felker commented 2 years ago

My 2D problem reproducer from https://github.com/PrincetonUniversity/athena/issues/365#issuecomment-729114935 still triggers some s>1.0 warnings, but only at one time step. And tons of r>1.0 warnings, but this check is in the variable inversion and could be in the prolongation buffers, etc. Here is how I compiled and ran:

make clean; python configure.py --prob blast_scalar --flux hllc -hdf5 --cxx g++ --nscalars=1 --hdf5_path=/Users/felker/hdf5-serial/ -h5double; make -j8
cd bin
./athena -i athinput.blast_sph_2d time/nlim=64 time/integrator=vl2
cycle=3 time=1.3094734504890235e-03 dt=4.0424159088331131e-04
Conserved scalar exceeds 1.0; s(0,0,3,5) = 1.0000000000000162e+00
Conserved scalar exceeds 1.0; s(0,0,4,5) = 1.0000000000000162e+00
Conserved scalar exceeds 1.0; s(0,0,3,2) = 1.0000000000000162e+00
Conserved scalar exceeds 1.0; s(0,0,4,2) = 1.0000000000000162e+00

Primitive scalar exceeds 1.0; r(0,0,4,6) = 1.0000000000000002e+00
Primitive scalar exceeds 1.0; r(0,0,0,6) = 1.0000000000000002e+00
Primitive scalar exceeds 1.0; r(0,0,4,2) = 1.0000000000000002e+00
...
cycle=51 time=2.2131674380044111e-02 dt=4.5981656791154666e-04
Primitive scalar exceeds 1.0; r(0,0,2,3) = 1.0530641412451951e+00
Primitive scalar exceeds 1.0; r(0,0,2,4) = 1.0031918086024150e+00
Primitive scalar exceeds 1.0; r(0,0,2,5) = 1.0000640586284255e+00
Primitive scalar exceeds 1.0; r(0,0,3,3) = 1.0486935542292231e+00
Primitive scalar exceeds 1.0; r(0,0,3,4) = 1.0028060516123298e+00
Primitive scalar exceeds 1.0; r(0,0,3,5) = 1.0000552799189093e+00
Primitive scalar exceeds 1.0; r(0,0,4,4) = 1.0022943683438719e+00
Primitive scalar exceeds 1.0; r(0,0,4,5) = 1.0000427307192501e+00

I just put some simple checks in the code, so perhaps these aren't actually meaningful:

diff --git a/src/eos/eos_scalars.cpp b/src/eos/eos_scalars.cpp
index 806db06a..824c816d 100644
--- a/src/eos/eos_scalars.cpp
+++ b/src/eos/eos_scalars.cpp
@@ -53,6 +53,9 @@ void EquationOfState::PassiveScalarConservedToPrimitive(

           //r_n = (r_n < scalar_floor_) ? scalar_floor_ : r_n;
           //s_n = r_n * d;
+          if (r_n > 1.0) {
+            std::cout << "Primitive scalar exceeds 1.0; r(0," << k << "," << j << "," << i <<") = " << r_n << std::endl;
+          }
         }
       }
     }
diff --git a/src/scalars/add_scalar_flux_divergence.cpp b/src/scalars/add_scalar_flux_divergence.cpp
index 9bd04226..569d7718 100644
--- a/src/scalars/add_scalar_flux_divergence.cpp
+++ b/src/scalars/add_scalar_flux_divergence.cpp
@@ -91,6 +91,10 @@ void PassiveScalars::AddFluxDivergence(const Real wght, AthenaArray<Real> &s_out
 #pragma omp simd
         for (int i=is; i<=ie; ++i) {
           s_out(n,k,j,i) -= wght*dflx(n,i)/vol(i);
+          if (s_out(n,k,j,i) > 1.0) {
+            std::cout << "Conserved scalar exceeds 1.0; s(0," << k << "," << j << "," << i <<") = "
+                      << s_out(n,k,j,i) << std::endl;
+          }
         }
       }
     }