Closed trg818 closed 2 years ago
Dear Thomas,
Are you interested in helping to develop the code? We're very happy to accept proposed improvements from the community! If not, I can certainly have a look at this.
There are lots of issues like this in thermodynamics, and each equation of state has different hard bounds on its parameters.
Bob
On Tue, 4 Sep 2018, 17:08 Thomas Ruedas, notifications@github.com wrote:
Dear Burnman developers, I have been trying to fit various datasets of mineral physics data (EoS and cp) with a script based on example_fit_eos.py and frequently encounter problems of this type:

```
E_th = 3. * n * constants.gas_constant * T * debye_fn_cheb(debye_T / T)
  File "/Users/ruedas/chemrock/burnman/burnman/eos/debye.py", line 94, in debye_fn_cheb
    assert(x > 0.0)  # check for invalid x
AssertionError
```

I have determined that, at least for one of my test examples, the cause is that the Debye temperature is allowed to assume the value 0 in a call of debye_fn_cheb via thermal_energy. I think it should be possible to impose constraints on the fitting parameters in order to prevent the routine from crashing. As far as I can tell, scipy.optimize.least_squares, which seems to offer that feature in its newest version, is not used in the current master. Is it planned/worth considering in the future?

Generally, it would be desirable to have functions return gracefully even in the case of failure, in order to permit adjusting parameters or skipping results while continuing the execution of a more comprehensive script.
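For illustration, this is the kind of bounded fitting that scipy.optimize.least_squares offers (a minimal sketch with a toy one-parameter stand-in for a heat-capacity curve, not BurnMan's actual Debye model):

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # gas constant, J/mol/K

# Toy stand-in for a Debye-type heat-capacity curve; only physical for theta > 0.
def residuals(params, T, cp_obs):
    theta = params[0]
    cp_model = 3.0 * R * (1.0 - np.exp(-T / theta))
    return cp_model - cp_obs

T = np.array([100.0, 300.0, 500.0, 800.0])
cp_obs = 3.0 * R * (1.0 - np.exp(-T / 700.0))  # synthetic data, theta = 700 K

# bounds=(lower, upper) keeps theta strictly positive at every iteration,
# so the model function is never evaluated at theta <= 0
fit = least_squares(residuals, x0=[400.0], bounds=([1.0], [2000.0]),
                    args=(T, cp_obs))
```

With bounds present, scipy switches to a trust-region reflective method that never steps outside the box, which would avoid the assertion failure above.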
Hi Bob,
in principle, I'd be happy to contribute something, but I have to say that my knowledge of Python is still very rudimentary, I'm not an expert on optimization methods, and I could only work on this as a side project; in short, I'm rather the stupid end user.
But to be more specific about the point I raised and to determine how to proceed: what would have to be done? I get the impression from looking at nonlinear_fitting.py that you have implemented a nonlinear fitting algorithm from scratch rather than using whatever tools may have been available from Scipy or other libraries. Why is that? Are the routines provided by Scipy not up to the task?
Thomas
An afterthought somewhat related to the issue of the constraints: is there any tool in Burnman that derives a useful initial guess for a fit from the data to be fitted? I didn't have the impression that this is the case and so cooked something up for BM3/MGD myself, which I could certainly discuss in more detail and share if there is need or interest.
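A sketch of the kind of thing I mean (a hypothetical, heavily simplified helper, not my actual code and not part of BurnMan):

```python
import numpy as np

def initial_guess_bm3(P, V):
    """Derive rough starting values for an isothermal BM3 fit from (P, V)
    data: V_0 from the lowest-pressure datum, K_0 from a finite-difference
    estimate of -V dP/dV near that point, K' set to the canonical value 4."""
    order = np.argsort(P)
    P, V = P[order], V[order]
    V0 = V[0]
    # bulk modulus K = -V dP/dV, estimated from the first two points
    K0 = -V[0] * (P[1] - P[0]) / (V[1] - V[0])
    return V0, K0, 4.0

# synthetic compression data in SI units (Pa, m^3/mol)
V0, K0, Kp = initial_guess_bm3(np.array([0.0, 1.0e9, 2.0e9]),
                               np.array([11.0e-6, 10.9e-6, 10.8e-6]))
```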
@bobmyhill I have experimented a bit with some of the basic functions from the scipy.optimize toolkit for the purpose of fitting experimental data, by replacing the call to nonlinear_fitting.nonlinear_least_squares_fit with one of them. In principle, I found that it can be made to work, in some respects better than Burnman's native function, but often only marginally so. At any rate, I get the impression that putting bounds on the fit is often either not possible with the chosen method or does not work well. For now, I ended up using curve_fit from scipy.optimize, but I had also tried minimize with an L2-norm error function to be minimized.
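For concreteness, the minimize variant has this shape (shown here with a toy exponential compression model instead of the real EoS evaluation):

```python
import numpy as np
from scipy.optimize import minimize

# Toy model V(P) = V0 * exp(-P / K0) with synthetic, noiseless data;
# the real version would evaluate the mineral's EoS instead.
P = np.linspace(0.0, 10.0, 20)
V_obs = 2.0 * np.exp(-P / 50.0)

def l2_misfit(x):
    # sum of squared residuals between model and observations
    V0, K0 = x
    return np.sum((V0 * np.exp(-P / K0) - V_obs) ** 2)

res = minimize(l2_misfit, x0=[1.5, 30.0], method='Nelder-Mead')
```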
Contrary to the data in the example from example_fit_eos.py, I did not try to fit everything in one move, and I doubt that it is a good idea to do that. Focussing for now on 3rd-order Birch-Murnaghan/Mie-Grüneisen-Debye, I rather try to determine V_0 first if there are data to do that, and then fit K_0 and Kprime_0 together with the isothermal BM3 and V_0 fixed; if V_0 could not be fixed beforehand, it is fitted together with the other two parameters with BM3. This is generally the easy part and works well with both curve_fit and Burnman's native function. Only then do I fit the thermal equation of state with grueneisen_0, q_0, and Debye_0 as the remaining free parameters, keeping V_0, K_0, and Kprime_0 fixed. Sometimes bounds specified in curve_fit do actually have the effect of limiting the parameter space in a useful way, but often the effect is that the starting guess is accepted unchanged. In other cases, or without using bounds, the scipy functions also try parameter values they shouldn't, such as negative Debye temperatures, resulting in a crash; this is very annoying, and an option to yield a return value that permits the continuation of the program would be useful and highly appreciated. At least, I have in some instances been able to produce a reasonable fit to experimental heat-capacity data, which I have never been able to do with Burnman's native function. I would like to know to what extent you have been using Burnman to fit datasets and derive thermoelastic parameters, and what sort of data you have used.
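One workaround I could imagine for the crashes (a sketch of the idea, not existing BurnMan code) would be to wrap the forward model so that unphysical parameter values yield a huge penalty instead of an exception:

```python
import numpy as np

def safe_model(T_i, gamma0, q0, thD0, model_fn):
    """Evaluate a forward model, but return a huge penalty vector instead
    of raising when the parameters are unphysical (e.g. a non-positive
    Debye temperature). model_fn is a placeholder for the real EoS
    evaluation; a least-squares driver then simply backs away from the
    bad region rather than crashing."""
    if thD0 <= 0.0 or gamma0 < 0.0:
        return np.full_like(T_i, 1.0e10)
    try:
        return model_fn(T_i, gamma0, q0, thD0)
    except (AssertionError, FloatingPointError):
        return np.full_like(T_i, 1.0e10)
```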
In summary, I think it would be good if you could look at the problem as well. To give you an idea of what I have tried out most, I have edited eos_fitting.py as follows. In fit_pTP_data, I have defined the functions
```python
def ceos_intf(p_i, K0, Kp, V0=1.):
    # fit interface for the cold (room-T) isothermal EoS
    if V0 != 1.:
        mineral.params['V_0'] = V0
    mineral.params['K_0'] = K0
    mineral.params['Kprime_0'] = Kp
    V_i = np.empty_like(p_i)
    for i, p in enumerate(p_i):
        mineral.set_state(p, 298.)
        V_i[i] = mineral.V
    return V_i

def theos_intf(T_i, gamma0, q0, thD0):
    # fit interface for the thermal EoS (volumes at surface pressure)
    mineral.params['grueneisen_0'] = gamma0
    mineral.params['q_0'] = q0
    mineral.params['Debye_0'] = thD0
    V_i = np.empty_like(T_i)
    for i, T in enumerate(T_i):
        mineral.set_state(1.e5, T)
        V_i[i] = mineral.V
    return V_i

def cpfit_intf(T_i, gamma0, q0, thD0):
    # fit interface for heat-capacity data at surface pressure
    mineral.params['grueneisen_0'] = gamma0
    mineral.params['q_0'] = q0
    mineral.params['Debye_0'] = thD0
    print(gamma0, q0, thD0)  # debug output
    cp_i = np.empty_like(T_i)
    for i, T in enumerate(T_i):
        mineral.set_state(1.e5, T)
        cp_i[i] = mineral.C_p
    return cp_i
```
and then commented out the call to nonlinear_fitting.nonlinear_least_squares_fit and put in
```python
if 'V' in model.flags:
    if 'K_0' in model.fit_params:
        # cold isothermal fit
        fitres, pcov = curve_fit(ceos_intf, model.data.T[0], model.data.T[2],
                                 guessed_params)  # ,bounds=([0.,5./3.],np.inf))
        if len(fitres) == 2:
            mineral.params['K_0'] = fitres[0]
            mineral.params['Kprime_0'] = fitres[1]
        else:
            mineral.params['V_0'] = fitres[0]
            mineral.params['K_0'] = fitres[1]
            mineral.params['Kprime_0'] = fitres[2]
    else:
        # thermal fit with surface-p data
        fitres, pcov = curve_fit(theos_intf, model.data.T[1],
                                 model.data.T[2], guessed_params,
                                 bounds=([0., 0., 100.], [5., 6., 1500.]))
        mineral.params['grueneisen_0'] = fitres[0]
        mineral.params['q_0'] = fitres[1]
        mineral.params['Debye_0'] = fitres[2]
elif 'C_p' in model.flags:
    # heat-capacity fit
    fitres, pcov = curve_fit(cpfit_intf, model.data.T[1],
                             model.data.T[2], guessed_params,
                             bounds=([0., 0., 100.], [5., 6., 1500.]))  # np.inf))
    mineral.params['grueneisen_0'] = fitres[0]
    mineral.params['q_0'] = fitres[1]
    mineral.params['Debye_0'] = fitres[2]
```
there. Obviously, at the top of the file we need `from scipy.optimize import curve_fit, minimize`.
Thank you for playing with some ideas!
A few comments/suggestions/replies: The intention of the current functions in nonlinear_least_squares_fit is to provide a generalised framework for "optimal" (in the sense of maximum likelihood) nonlinear least squares fitting. There should be no reference to any thermodynamics in this file.
Contrary to the data in the example from example_fit_eos.py, I did not try to fit everything in one move, and I doubt that it is a good idea to do that.
Could you clarify this statement? example_fit_eos.py gives several examples of fitting and fixing different parameters.
I have experimented a bit with some of the basic functions from the scipy.optimize toolkit for the purpose of fitting experimental data by replacing the call to nonlinear_fitting.nonlinear_least_squares_fit with one of them. I have been able in some instances to produce a reasonable fit to experimental data for the heat capacity, which I have never been able to do with Burnman's native function.
If one of your test cases fails (for fitting Debye temperature, for example), then that problem is either very non-linear, the starting guesses are a long way from the solution, or the data simply don't constrain the chosen parameters very well.
In principle, I found that it can be made to work somehow, and in some respects better than Burnman's native function, but often only marginally so.
How do you mean, better? More robust? Lower misfit? Do the tests in test_fitting.py still pass with your changes? Incidentally, I just added the nonlinear test in Neri et al. (1990) to the testing suite here: https://github.com/geodynamics/burnman/pull/330 ... any changes will also need to be checked against this test case.
Potential solutions (I'm open to alternatives!):
In order for me to suggest a fix / fixes to your problems, it would be good for you to push a branch with your work on this problem to your own forked burnman repository on github, and then post a link to the address on this comment thread.
Thanks for your feedback, and sorry for the delay with replying (EPSC is approaching, and there's stuff to do...)
Could you clarify this statement? example_fit_eos.py gives several examples of fitting and fixing different parameters.
I was mostly thinking of the fit to the periclase data, where fits were made for the high-p,T volume data and then also with the enthalpy data included, and where arrays like ['V_0', 'K_0', 'Kprime_0', 'grueneisen_0', 'q_0'] were passed to burnman.eos_fitting.fit_PTV_data, for example. I understand that the example is only for demonstration, and I guess it's a bit of a philosophical question how to proceed. My argument for the procedure I outlined in my previous post is that I try to focus on the data subset that presumably captures a certain parameter best. So, for V0 one can take advantage of the many measurements taken under STP conditions, without a multi-anvil cell or another sort of bulky apparatus around the sample, and pinpoint that parameter with high accuracy independently of the others. That removes one free parameter from subsequent steps. Next, K0 and K' are fitted together using room-T compression data only, because, as these two are by definition room-T parameters, compression data at room T should capture them best. With these two parameters then fixed as well, I have only three degrees of freedom left for the thermal fit, in which I must fit the remaining three parameters together, as they are not separable physically. The general point is to use as few free parameters as possible for a given problem.
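As a toy illustration of this staged procedure (with a made-up compression curve standing in for a real BM3 evaluation; the real version would set the mineral's parameters and read off volumes as in the snippets further up the thread):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy compression curve standing in for isothermal BM3
def isothermal_V(P, K0, Kp, V0):
    return V0 * (1.0 + P / K0) ** (-1.0 / Kp)

P = np.linspace(0.0, 50.0, 30)             # pressures, GPa say
V_obs = isothermal_V(P, 130.0, 4.0, 11.0)  # synthetic "data"

# Stage 1: pin V_0 from the zero-pressure datum
V0_fixed = V_obs[0]

# Stage 2: fit K_0 and K' with V_0 held fixed via a closure
popt, pcov = curve_fit(lambda P, K0, Kp: isothermal_V(P, K0, Kp, V0_fixed),
                       P, V_obs, p0=[100.0, 4.5])
```

Each stage then hands its result on as a fixed parameter, so the thermal fit only ever sees three free parameters.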
If one of your test cases fails (for fitting Debye temperature, for example), then that problem is either very non-linear, the starting guesses are a long way from the solution, or the data simply don't constrain the chosen parameters very well.
Considering how simplistic the assumptions underlying the Debye theory of heat capacity are, it describes cp better than it should, but it seems to be true that the match is often not perfect - at least not to the same degree as BM3 can be fitted to good isothermal compression data, where the points may line up on the curve like pearls on a string. The problem is certainly quite non-linear, considering what the Debye model looks like and what turns and twists have to be taken to connect a cp measurement with gamma0, q0, and the Debye T. Judging from "fits" I made by trial and error, with visual inspection as the sole criterion, however, I would think that my starting guesses are not too bad.
How do you mean, better? More robust? Lower misfit? Do the tests in test_fitting.py still pass with your changes?
Mostly I simply meant that it didn't result in crashes quite as often and that it did return a result at all for specific heat dataset fits. I haven't looked at the tests in test_fitting.py yet.
In general, a good way to deal with hard non-negativity / non-physical bounds which are only hit at some intermediate iterations is to use the method of Lagrangian multipliers. An alternative is to create "dummy" data, which penalise the misfit function if a parameter lies far from a certain value.
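A minimal sketch of the dummy-data idea (with a toy exponential model, not BurnMan code): the extra "datum" is just one more residual that grows when the parameter strays from its preferred value.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from a toy model with true rate k = 2
t = np.linspace(0.0, 2.0, 15)
y = np.exp(-2.0 * t)

def residuals_with_prior(params):
    k = params[0]
    data_res = np.exp(-k * t) - y
    # "dummy datum": penalises the misfit when k strays far from 2,
    # acting like a soft bound of width 0.5 on the parameter
    prior_res = (k - 2.0) / 0.5
    return np.append(data_res, prior_res)

fit = least_squares(residuals_with_prior, x0=[1.0])
```

This keeps the problem an ordinary least-squares fit, which is why it slots into existing fitting machinery more easily than hard constraints do.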
I had actually briefly tried the dummy data method, but it didn't help either (not that I had much hope, given the nonlinearity of the problem). I might give the Lagrangian multipliers a try at some point, though.
Before pushing a branch of my own with my attempts, I guess I'll try around a bit more and then come back to you.
Addressed by #468.