EmiliaJarochowska closed this issue 1 month ago.
So the first thing I will try after my vacation is to run those combinations on the `Use_solve_ivp_without_py-pde_wrapper` branch, I guess.
Sorry, I realized today I should have run them, hence the issue (so I don't forget anymore). I'll try to run them but I want to finish something else first.
I tried running it for (Phi0, Phi00) = (0.8, 0.5) but got the following:
Traceback (most recent call last):
File "/Users/emilia/Documents/Documents - UU101746/GitHub/Integrating-diagenetic-equations-using-Python/ScenarioA.py", line 5, in <module>
import matplotlib.pyplot as plt
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/matplotlib/__init__.py", line 210, in <module>
_check_versions()
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/matplotlib/__init__.py", line 204, in _check_versions
module = importlib.import_module(modname)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/kiwisolver/__init__.py", line 8, in <module>
from ._cext import (
ImportError: dlopen(/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/kiwisolver/_cext.cpython-310-darwin.so, 0x0002): tried: '/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/kiwisolver/_cext.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/kiwisolver/_cext.cpython-310-darwin.so' (no such file), '/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/kiwisolver/_cext.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
@NiklasHohmann could you try?
@EmiliaJarochowska Running from the current `main` branch (this commit) I get the same RuntimeError (Time step ...).
Thanks for checking @NiklasHohmann. This discussion was actually about the `Use_solve_ivp_without_py-pde_wrapper` branch, so if you get it on `main`, then this is a different issue. Can you also try `Use_solve_ivp_without_py-pde_wrapper` for good measure?
I just tried on Linux, but `pipenv` doesn't work because of a problem with OpenSSL; I need to sit down and update a bunch of things, so it will take longer. TBC.
@EmiliaJarochowska I tried with the `Use_solve_ivp_without_py-pde_wrapper` branch and got an installation error with `pipenv`, linked to Python versioning issues. I'll try again once this is resolved.
I got it running after replacing `._apply_operator` with `.apply_operator`, i.e. dropping the underscore. Pushed it; apparently the underscore-prefixed name had been deprecated.
I got successful runs for (Phi0, Phi00) = (0.9, 0.6), (0.9, 0.7) and (0.9, 0.8). Here is a result for (0.9, 0.8).
However, I realised that my branch had not been updated, so I was effectively on a commit from January 24 last year. That commit did not have the bottom boundary conditions implemented properly; it still had
self.bc_CA = [{"value": CA0}, {"curvature" : 0}]
self.bc_CC = [{"value": CC0}, {"curvature": 0}]
instead of
self.bc_CA = [{"value": CA0}, {"value": 1e99}]
self.bc_CC = [{"value": CC0}, {"value": 1e99}]
and e.g.
CA_grad_back = CA._apply_operator("grad_back", self.bc_CA)
CA_grad_forw = CA._apply_operator("grad_forw", self.bc_CA)
CA_grad = ScalarField(self.Depths, np.where(U.data>0, CA_grad_back.data, \
CA_grad_forw.data))
instead of
CA_grad_back = CA._apply_operator("grad_back", self.bc_CA)
CA_grad = CA_grad_back
(and likewise for CC).
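The switching variant above can be sketched with plain numpy, independent of py-pde (the grid, concentration profile, and velocity below are illustrative stand-ins, not the model's actual fields):

```python
import numpy as np

# Minimal sketch of selecting between the two one-sided differences of a
# concentration profile CA on a uniform grid, depending on the sign of the
# advection velocity U. All values are illustrative.
dz = 0.1
z = np.arange(0.0, 1.0, dz)
CA = np.exp(-z)                      # illustrative concentration profile
U = np.where(z < 0.5, 1.0, -1.0)     # advection velocity changing sign

# One-sided differences (end values simply copied from the neighbour)
grad_back = np.empty_like(CA)
grad_back[1:] = (CA[1:] - CA[:-1]) / dz
grad_back[0] = grad_back[1]
grad_forw = np.empty_like(CA)
grad_forw[:-1] = (CA[1:] - CA[:-1]) / dz
grad_forw[-1] = grad_forw[-2]

# Upwind choice: backward difference where the flow is downward (U > 0),
# forward difference where it is upward (U < 0).
CA_grad = np.where(U > 0, grad_back, grad_forw)
```

This mirrors the `np.where(U.data>0, ...)` selection in the snippet above: each node always uses the difference taken from its upstream side.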
Apparently, although incorrect, that setup provided some extra stability that we have now lost.
It seems the constant porosity diffusion coefficient has not yet been implemented in this branch, i.e. the `Use_solve_ivp_without_py-pde_wrapper` branch.
Thanks for checking this. These differences show how much you have done to improve this model and the implementation since we last discussed this branch. The constant porosity diffusion coefficient was supposed to provide stability but apparently it isn't essential.
In other news, the `Pipfile` in this branch had not been updated either, so the problem I reported earlier disappeared after upgrading the `Pipfile`. I can run this branch on macOS again (not on Linux, but that's a problem on my side).
It runs for me now too, although I get an underflow warning with the default parameters for scenario A:
> CA_grad_back = CA._apply_operator("grad_back", self.bc_CA)
> CA_grad_forw = CA._apply_operator("grad_forw", self.bc_CA)
> CA_grad = ScalarField(self.Depths, np.where(U.data>0, CA_grad_back.data, CA_grad_forw.data))
>
> instead of
>
> CA_grad_back = CA._apply_operator("grad_back", self.bc_CA)
> CA_grad = CA_grad_back
>
> (and likewise for CC).
I am digesting these results....
It seems accommodating switches in U, i.e. switching from forward to backward differencing and vice versa depending on the sign of U, provides extra stability, needed when there is a "gap" between Phi0 (the surface value) and PhiIni.
Well, this switch just means that you always apply the "upwind solution", and we know from the literature that this provides extra stability, so that shouldn't be a surprise.
We removed the switch to make sure that our dummy bottom boundary condition is never enforced, following L'Heureux. But it has a downside.
Fortunately, since the end of last year py-pde supports boundary conditions that depend on the state. This may be a way out. However, suppose we want to go for that extra stability and always use the upwind solution, which boils down to forward differencing for a negative U, i.e. for an upward advective flow: what would our bottom boundary condition be? For forward differencing one needs a bottom boundary condition instead of a surface boundary condition. In fact, this is only an issue if U is negative at the bottom; U may change sign along depth.
Perhaps the former "dummy" bottom boundary conditions for CA and CC were not so weird, i.e. that these compositions should have zero curvature (i.e. second derivative) along depth:
self.bc_CA = [{"value": CA0}, {"curvature" : 0}]
self.bc_CC = [{"value": CC0}, {"curvature": 0}]
On the other hand, there is no physical reasoning to support it, afaik. What we do see, in every run I can remember, is that the steady-state solutions show flat profiles at deep levels, i.e. even the first derivatives are zero.
Perhaps the easiest way to achieve our goals is to revert this commit.
> What we do see, for any run that I can remember, is that the steady-state solutions show flat profiles at deep levels, i.e. even the first derivatives are zero.
But U<0 could still happen before a steady-state solution is reached? Which we wouldn't see after the run is completed (unless we run for very short times)? And possibly this could lead to derailing, as in the `main` branch at the moment? Could this be the reason for not reaching solutions for certain combinations of Phi0 and PhiIni?
In the series of runs I made using the `main` branch in December, I didn't switch on tracking U at the bottom. Perhaps I need to re-run some combinations to see whether/how often U<0 happens.
> And possibly this could lead to derailing, as in the `main` branch at the moment? Could this be the reason for not reaching solutions for certain combinations of Phi0 and PhiIni?
That is indeed my best guess for why certain integrations derail: when U turns negative, the stable upwind solution (taking the backward difference for the gradient) turns into an unstable downwind solution.
We discussed running these cases, i.e. (Phi0, Phi00) = {(0.9, 0.5), (0.9, 0.6), (0.9, 0.7), (0.9, 0.8), (0.8, 0.5), (0.8, 0.6), (0.8, 0.7), (0.7, 0.5)}, for a very short time to see if we would capture how U changes before they derail. So I ran them for 0.1 * Tstar, but they all derailed immediately. And U at the bottom is only written to file if the solution completes, so we got no record of whether it caused the error.
However, I ran (0.8, 0.8) for that same short interval and got a very weird result for porosity. This is before reaching steady state (that was the point), but it does look like a likely source of instability: Final_distributions.pdf. This run had U>0 throughout, though. It's very handy to be able to track U.
So the way U is currently exported, we can't tell if it's behind the error, because if the error occurs, U won't be saved. However, I will still check whether we encounter U<0 in the runs that complete.
Okay, which branch are you currently running?
I guess `main`. What you could do is revert the changes from this commit manually in `LHeureux_model.py`.
(disregard the small change in ScenarioA.py, that is also part of that commit).
And preferably use the `.apply_operator` method instead of the deprecated `._apply_operator`.
If you are really confident, you could of course do some reverse cherry-picking, but I'm not sure such a git command exists.
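For what it's worth, such a command does exist: `git revert` creates a new commit that undoes an earlier one. A sketch in a throwaway repository (file name and commit messages are illustrative):

```shell
# Demonstrate "git revert" in a scratch repository.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=t@example.com -c user.name=t commit --allow-empty -qm "base"
echo "bad change" > model.py
git add model.py
git -c user.email=t@example.com -c user.name=t commit -qm "bad commit"
# Undo the offending commit (HEAD here; any reachable SHA works),
# creating a new inverse commit:
git -c user.email=t@example.com -c user.name=t revert --no-edit HEAD
ls model.py 2>/dev/null || echo "model.py is gone again"
```

`git revert --no-commit <sha>` instead stages the inverse changes without committing, which is handy when reverting several commits at once.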
That will most likely give you completed runs, because this should add stability to the integration of the compositions.
Thanks! My runs of the `main` branch yesterday were what we discussed in the morning: finding out if the error is related to U reversing direction. But I didn't consider that U wouldn't be saved unless integration completes.
Today I made some runs with high `Phi0` and `PhiIni` and with tracking of U at the bottom turned on, over 30 T*, to see if U ever gets negative. I will see if I can cherry-pick a version that completes the runs that gave us the error.
Okay. If you got completed runs with high porosity from the current `main` branch, I guess that means you set `Phi0=PhiIni`, since a discrepancy between these two values seems to induce an instability.
We could consider adding a feature to make sure that U at the bottom is also saved before integration completes.
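A minimal, py-pde-independent sketch of what that could look like (the file name, the callback, and the sample values are all made up): append U at the bottom to a CSV after every accepted step, so the record survives a crashed integration.

```python
import csv
import os
import tempfile

# Hypothetical sketch of saving U at the bottom node *during* the run rather
# than only on completion. Because each record is written and the file closed
# immediately, a later crash (e.g. "Time step below 1e-10") cannot erase it.
out_path = os.path.join(tempfile.gettempdir(), "U_bottom_log.csv")

def log_bottom_U(t, U_bottom, path=out_path):
    """Append one (time, U_bottom) record to the log file."""
    with open(path, "a", newline="") as fh:
        csv.writer(fh).writerow([t, U_bottom])

# Illustrative use inside a time loop (values are made up):
open(out_path, "w").close()  # start a fresh log
for t, u in [(0.0, 0.3), (0.1, 0.1), (0.2, -0.05)]:
    log_bottom_U(t, u)

with open(out_path) as fh:
    rows = list(csv.reader(fh))
```

In the actual code this callback could be invoked from the existing `data_tracker`, so the partial record would show whether U turned negative just before a derailment.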
Yes, sorry, that was unclear. I ran (Phi0, PhiIni) = {(0.9, 0.9), (0.8, 0.8), (0.8, 0.9), (0.7, 0.8)}; I'm just working on plotting them in an organized manner. Saving U during the run would be just for finding the source of the error, so let me check this first.
Interesting. So discrepancies between `Phi0` and `PhiIni` with the opposite sign, i.e. with `Phi0` < `PhiIni`, do not seem to induce instabilities.
How can we explain that?
My hypothesis for now is that `Phi0` > `PhiIni` enforces upward advection at the start of the integration. Of course, compaction should in time smooth out the difference between these two values, but compaction acting slowly is an essential point of this model. Also, at the surface the effective stress acting on the sediment (and thus the compaction) is smallest.
Of course we have the same absolute difference when `Phi0` < `PhiIni`, but in that case U should be positive. This is why I suspect the sign makes a difference.
I couldn't register it, though, because these runs derail so fast that only the state at the initial step is written into the HDF5 file.
Under hydrostatic equilibrium, the porosity profile should be exponential (see e.g. section 2.2.2 in Chapter 5 of Jourabchi's PhD thesis, which was the basis of this part of Ivan's model), so for a non-oscillatory steady state this would be the solution for the porosity profile. Tweaking the parameters of Eq. 24 could thus give us insight into how quickly the porosity re-equilibrates, but we've seen that the stability is very sensitive to these parameters.
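As a sketch of that expectation (the parameterisation and all numbers below are illustrative, not the model's Eq. 24): an exponential relaxation from the surface porosity towards a deep asymptote.

```python
import numpy as np

# Illustrative exponential (Athy-type) equilibrium porosity profile:
#   Phi(z) = Phi_deep + (Phi0 - Phi_deep) * exp(-z / z_c)
# Phi0: surface porosity, Phi_deep: deep asymptote, z_c: compaction length
# scale. All values are made up for illustration.
Phi0, Phi_deep, z_c = 0.8, 0.5, 100.0
z = np.linspace(0.0, 500.0, 51)          # depth, same units as z_c
Phi = Phi_deep + (Phi0 - Phi_deep) * np.exp(-z / z_c)
```

A small `z_c` (fast compaction) would close the gap between Phi0 and the deep value over a short depth interval, which hints at why these parameters interact with stability.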
I have to stop here for now, but the runs of the `main` branch show highly positive U:
I do wonder what this sudden jump in U is though.
Interesting. Perhaps at some point you can try e.g. (Phi0, PhiIni) = (0.9, 0.8), which will complete without error once the revert of the modifications discussed above has been applied.
You know how to send me down a rabbit hole 😉 I've learned that it is possible to run `git diff` on two different files on different branches, but not how to do this at a specific commit. So I hoped my semi-manual attempt didn't introduce any errors, but I did not manage to resolve this error:
Traceback (most recent call last):
File "/Users/emilia/Documents/Documents - UU101746/GitHub/Integrating-diagenetic-equations-using-Python/marlpde/Evolve_scenario.py", line 143, in <module>
integrate_equations(**all_kwargs)
File "/Users/emilia/Documents/Documents - UU101746/GitHub/Integrating-diagenetic-equations-using-Python/marlpde/Evolve_scenario.py", line 96, in integrate_equations
sol, info = eq.solve(state, t_range=End_time, dt=dt, \
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/pdes/base.py", line 604, in solve
final_state = controller.run(state, dt)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/controller.py", line 326, in run
self._run_single(state, dt)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/controller.py", line 176, in _run_single
stepper = self.solver.make_stepper(state=state, dt=dt)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/base.py", line 602, in make_stepper
adaptive_stepper = self._make_adaptive_stepper(state)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/explicit.py", line 392, in _make_adaptive_stepper
return super()._make_adaptive_stepper(state)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/base.py", line 495, in _make_adaptive_stepper
single_step_error = self._make_single_step_error_estimate(state)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/explicit.py", line 368, in _make_single_step_error_estimate
return self._make_single_step_error_estimate_rkf(state)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/explicit.py", line 289, in _make_single_step_error_estimate_rkf
rhs = self._make_pde_rhs(state, backend=self.backend)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/solvers/base.py", line 177, in _make_pde_rhs
rhs = self.pde.make_pde_rhs(state, backend=backend) # type: ignore
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/pdes/base.py", line 278, in make_pde_rhs
rhs = self._make_pde_rhs_numba_cached(state, **kwargs)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/pdes/base.py", line 235, in _make_pde_rhs_numba_cached
self.check_rhs_consistency(state, rhs_numba=rhs, **kwargs)
File "/Users/emilia/.local/share/virtualenvs/Integrating-diagenetic-equations-using-Pyt-hTapD1hM/lib/python3.10/site-packages/pde/pdes/base.py", line 181, in check_rhs_consistency
self._logger.warning(
AttributeError: 'LMAHeureuxPorosityDiff' object has no attribute '_logger'
I'm sorry I couldn't make it work. I'll dig some more to see how to resolve this but maybe you have a suggestion?
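On the `git diff` question above: the `<rev>:<path>` form works with specific commits as well as branch names. A sketch in a throwaway repository (file name is illustrative):

```shell
# Demonstrate diffing one file at two revisions with the <rev>:<path> form.
set -e
cd "$(mktemp -d)"
git init -q
printf 'old line\n' > model.py
git add model.py
git -c user.email=t@example.com -c user.name=t commit -qm "v1"
git branch before                    # label this commit; a raw SHA works too
printf 'new line\n' > model.py
git -c user.email=t@example.com -c user.name=t commit -qam "v2"
# Same file at two revisions (any mix of branch names, tags, or SHAs):
git diff before:model.py HEAD:model.py
```

So comparing, say, `LHeureux_model.py` between a branch tip and a specific old commit is one command, with no manual copying needed.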
Ah, too bad.
I couldn't find the source of the error immediately.
The real question is: why does the evolution defined by `evolution_rate` vs. `_make_pde_rhs_numba` give different results?
You seem to have implemented things correctly.
Let me fix this tomorrow.
Okay, the indentation of lines 353-358 was incorrect. You can `git pull`.
Btw, since we introduced a change in the algorithm, it is not surprising that a regression test fails, in this case `test_integration_Scenario_A`. The other ones still pass, which is interesting.
Anyway, we cannot merge with `main` yet. If we want this kind of stable integration in `main`, we would first have to adjust that regression test.
I'm sorry for this stupid error 🫣
Let's take a look at the results first. I ran it with the values currently in the repo, but got an odd, non-physical result: well, I think it is non-physical, because both carbonate minerals at the surface are gone. Zero. Since we have a steady supply of fresh sediment of which 90% is CA and AR, it is not possible that they dissolve so fast. Possibly this is not steady state; I have to check. L'Heureux has a 0-0 state (Fig. 5a and 7), but that's at the bottom of the system. I re-ran with (0.9, 0.5) and that reached an A-0 state at the bottom. But it's fast 😅
Np, such coding errors are very hard to prevent.
Happy to see that no integration now derails! So we got ourselves some extra stability. At the cost of a bottom boundary condition for calcite and aragonite.
But indeed that first plot seems to show an unphysical result. Weird.
Rerun with a different solver (0.9, 0.9)
> Let's take a look at the results first. I ran it with the values currently in the repo, but got an odd non-physical result:
So that would be (Phi0, Phi00) = (0.8, 0.7), right? And not (0.9, 0.9)?
Yes, sorry, you're correct.
The problem (mineral compositions too low near the surface, i.e. the fraction of insoluble terrigenous material close to 1) for (Phi0, Phi00) = (0.8, 0.7) persists with the new `crank-nicolson` solver from `py-pde`.
To reproduce (but why would you?):
sol, info = eq.solve(state, t_range=End_time, dt=dt,
                     solver="crank-nicolson",
                     tracker=["progress",
                              storage.tracker(kwargs["progress_tracker_interval"]),
                              live_plots, data_tracker],
                     backend=kwargs["backend"], ret_info=kwargs["retinfo"])
I am also wondering if gaps between initial and surface porosity are not in fact pathological cases, which do not occur in nature. Implementing #17 and #18 might give us extra stability.
I think we are satisfied with runs at (0.8, 0.8) so this issue can also be archived. Please let me know if you agree.
Wouldn't that severely limit initial conditions?
We have not checked whether this issue persists after your most recent changes, so we can check that if you want. But if runs at (0.8, 0.8) illustrate the inadequacy of the model sufficiently, we don't have to put work into stabilizing runs with other arguments.
We don't want to fix something we don't want solutions for.
For the following combinations of (Phi0, Phi00): (0.9, 0.5), (0.9, 0.6), (0.9, 0.7), (0.9, 0.8), (0.8, 0.5), (0.8, 0.6), (0.8, 0.7) and (0.7, 0.5), the integration stops with `RuntimeError: Time step below 1e-10`. Possibly explicit solvers are not suitable for such combinations; in that case this would be addressed by #33. But we don't know, so this needs investigation. I don't know how, though, other than changing the solver.