Closed pinkfloydisch closed 2 years ago
I can help you, but before I do, we need to talk about β. If I understand correctly, this is a parameter that abruptly changes three times in the integration. This cannot be implemented literally because discontinuities are poison for the solver. It is possible to make a close approximation using a sharp sigmoid though. However, in many real applications, you want things to behave more continuously anyway, e.g., a linear increase. Can you please describe what you ideally want to happen here and why?
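To illustrate what I mean by approximating the steps with a sharp sigmoid (a plain-NumPy sketch with made-up numbers, independent of the actual solver):

import numpy as np

def smooth_step(t, t_switch, before, after, width=0.5):
    # sigmoid blend from `before` to `after` around t_switch;
    # the smaller `width`, the closer this is to a hard step
    return before + (after - before)/(1 + np.exp(-(t - t_switch)/width))

# e.g., β dropping from 0.6 to 0.3 around day 16 (hypothetical values):
ts = np.linspace(0, 30, 301)
betas = smooth_step(ts, 16, 0.6, 0.3)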
Another thing that would be helpful is values for all the parameters, so I can make a test run.
Finally, please note that it’s not nice to post the same request in several places. You don’t want people to invest their time helping you with a problem that somebody else already solved.
Hi, thanks a lot for your quick answer. First of all, you are totally right, but this problem is very challenging for me and I was confused about it, and I was not sure what the feedback on my request would be. For around two or three weeks I have been stuck on this point of my thesis, and it is very stressful. I will keep your advice in mind. Thanks for the friendly advice.
Beta is the transmission rate, and it is continuous. Before, I modelled it as a logistic function, but in that case I always get a decrease. I need it to be free, because the transmission can change over time. I just want to measure it in different stages, since the transmission rate will change in different stages, so I need four betas, but I am not forced to have them in this form. If you have any suggestion, I am completely open to it. I am using the 'lmfit' library for minimization. The parameters will be estimated in the minimization, for example: beta0 = 0.6, beta1 = 0.5, beta2 = 0.3, beta3 = 0.1, kappa = 1/3, gamma = 1/10, delta = 1, mu = 0.02, E0 = 280, I0 = 114, R0 = 16, D0 = 0, S0 = 80000000 - 280 - 114 - 16 - 0
Since 'I' will have two forms, with and without delay, in the delay form my history function is I0*np.exp((-np.log(0.1)/tau)*(t-t0)); the rest is the same as before.
Beta is the transmission rate, and it is continuous. Before, I modelled it as a logistic function, but in that case I always get a decrease. I need it to be free, because the transmission can change over time. I just want to measure it in different stages, since the transmission rate will change in different stages, so I need four betas, but I am not forced to have them in this form. If you have any suggestion, I am completely open to it.
This is confusing. First you say it’s continuous; then you say it is subject to stages. There are several ways to implement an input, so when in doubt assume there are no limitations (as long as you stay continuous). Can you mathematically describe what your constraints on β are and what you ideally want? For example, I could make β a cubic (or higher-order) polynomial whose coefficients you can control. I can also make it approximate a step function as you use in your sketch.
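For illustration, a polynomial β could be set up roughly along these lines (just a sketch; c0…c3 are hypothetical coefficient symbols that would become control parameters of the integration):

from jitcdde import t
from symengine import symbols

# hypothetical coefficients that the optimiser would control:
c0, c1, c2, c3 = symbols("c0 c1 c2 c3")
β = c0 + c1*t + c2*t**2 + c3*t**3   # cubic transmission rate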
Since 'I' will have two forms, with and without delay, in the delay form my history function is I0*np.exp((-np.log(0.1)/tau)*(t-t0)); the rest is the same as before.
I fail to understand this. Do you want to say that the initial condition for I is not constant, but described by that function?
Beta is the infection rate in my SEIRD model. I am working on a duration of 64 days, from the first of March until the 3rd of May. In comparison, gamma and kappa are parameters but fixed, and mu, the probability of death, is to be estimated, but over the 64 days I will have one mu. Beta, however, changes over these 64 days, and finding only one beta does not make sense: for example, before the lockdown (day 16), two weeks after it, again around four weeks later to see its effect, and finally from day 52 to the end, when the easing of the quarantine happened in Germany. Now I have to estimate these four betas, for example in my parameter estimation with the lmfit library:
fit_params = Parameters()
fit_params.add('mu', value=mu, min=0.01, max=0.05)
fit_params.add('E0', value=E0, min=130, max=700)
fit_params.add('delta', value=0.2, min=0.05, max=0.5)
fit_params.add('beta0', value=0.5, min=0.05)
fit_params.add('beta1', value=0.4, min=0.05)
fit_params.add('beta2', value=0.1, min=0.05)
fit_params.add('beta3', value=0.1, min=0.05)
The results of these parameters will go into an array: u_vector = np.array([beta0, beta1, beta2, beta3, delta, mu, E0, R0, tau, I0]), and I will use the squared norm (L2 norm) of this vector as a regularization term in my cost function, to avoid local minima and overfitting.
'I', which is I(t), is my infected compartment; part of it, (1-mu)*gamma*I, goes to the recovered compartment, and part of it, with delay, goes to the death compartment: mu*gamma*I(t-tau).
This is the objective function that I want to minimize: J = sum((I(t) - confirmed_data)**2) + sum((D(t) - death_data)**2) + np.linalg.norm(u_vector, ord=2)**2
Here I(t) and D(t) are the solutions of the model, i.e., the results returned by the delay differential equation solver.
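In code, the objective roughly looks like this (a sketch; I_model, D_model, and u_weight are placeholder names for the model output at the data time points and the regularization weight):

import numpy as np

def objective(I_model, D_model, confirmed_data, death_data, u_vector, u_weight=1.0):
    # data-fit terms plus a squared-L2 penalty on the parameter vector
    fit_I = np.sum((I_model - confirmed_data)**2)
    fit_D = np.sum((D_model - death_data)**2)
    regularization = np.linalg.norm(u_vector, ord=2)**2
    return fit_I + fit_D + u_weight*regularization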
Okay, here we go:
from jitcdde import jitcdde,y,t
from jitcxde_common import conditional
from chspy import CubicHermiteSpline
from symengine import symbols,Rational
import numpy as np
τ = 11.5
κ = Rational(1,3)
γ = Rational(1,10)
t0 = 0
N = 80000000
E0 = 280
I0 = 114
R0 = 16
D0 = 0
S0 = N - E0 - I0 - R0 - D0
days = 64
times = np.arange(t0,t0+days+1,1)
control_pars = [β0,β1,β2,β3,μ,E0,δ] = symbols("beta0 beta1 beta2 beta3 mu E0 delta")
S,E,I,R,D = [y(i) for i in range(5)]
labels = ["S","E","I","R","D"]
Id = y(2,t-τ)
# Somewhat ugly, but gets hardcoded and can be easily adjusted using control parameters:
width = 0.5 # half a day
β = conditional(t,22,
        conditional(t,52,β3,β2,width),
        conditional(t,16,β1,β0,width),
        width
    )
SEIRD = {
    S: -β*S*I/N ,
    E: β*S*I/N - κ*E ,
    I: κ*E - (1-μ)*γ*I - μ*γ*Id,
    R: (1-μ)*γ*I ,
    D: μ*γ*Id,
}
# Defining the initial past as a spline for easy re-use:
initial_past = CubicHermiteSpline(n=len(SEIRD))
initial_past.from_function(
    [ S0, E0, I0*10**((t-t0)/τ), R0, D0 ],
    times_of_interest = [t0-τ,t0],
    max_anchors=10, tol=3
)
# By defining the DDE outside of run_model we need to compile only once:
DDE = jitcdde( SEIRD, control_pars=control_pars, verbose=False )
def run_model(*par_values):
    DDE.purge_past()
    DDE.add_past_points(initial_past)
    DDE.set_parameters(par_values)
    DDE.adjust_diff()
    return np.vstack([ DDE.integrate(time) for time in times ])
solutions = run_model( 0.6, 0.5, 0.3, 0.1, 0.02, 280, 1 )
pre_times = np.linspace(t0-τ,t0,20)
histories = initial_past.get_state(pre_times)
from matplotlib.pyplot import subplots
fig,axes = subplots()
for label,solution,history in zip(labels,solutions.T,histories.T):
    axes.plot(
        np.hstack((pre_times,times)),
        np.hstack((history,solution)),
        label=label
    )
axes.legend()
axes.set_yscale("log")
fig.savefig("results.pdf")
Please note: you may need to install the latest version of CHSPy, e.g.
pip3 install git+git://github.com/neurophysik/CHSPy
or similar.

AttributeError Traceback (most recent call last)
Did you install the latest version of CHSPy?
Besides, last night I worked a lot to prepare my code to be as clear as possible to share with you. In my code you can see that I can solve the delay differential equations with 'ddeint', but it is very sensitive: with a little change in any bound or fitting method I get very wrong and ugly results. In this code I used the 'Nelder-Mead' method for minimization. I explained it to you yesterday, but as it is a little complex, and since you are investing time and energy to help me, I thought it would be better to provide my code so you can see what and how I am working. I invited you as a contributor. Since I will use this code and model for my master's thesis, I kept it private. I am writing my master's thesis in Germany, and I was worried about making it public before my presentation: if anyone else uses my code, there could be problems like plagiarism for me.
No matter how clean your code, it's considerably easier if you clearly state your requirements. Anyway, I am familiar with optimisers, and the above version is optimised for that, namely by making run_model as fast as possible. Connecting the components together is really up to you.
I was worried about making it public before my presentation: if anyone else uses my code, there could be problems like plagiarism for me.
While there are other reasons not to share your code, plagiarism (by you) is not one of them. Plagiarism is passing off somebody else's work as your own. You cannot do this accidentally, and particularly not by somebody else using your code. Even false suspicions of plagiarism should be easy to dispel, since you would have a clear timestamp. What would be plagiarism is if you passed off my code as yours without attribution.
Now it is running :)
First of all, thanks a lot. It is working, but it is quite far from my current code. Can I share the rest of my code here as well?
It’s your decision what you share. Worst that can happen is that I don’t want to look into it. Mind that I can primarily help you with JiTCDDE-related problems. These should be sufficiently distinct from the rest. So, if you have a problem, it’s best to first determine whether it’s the integration or something else. If yes, then try to boil your problem down to that.
I don't know how to implement JiTCDDE in my code to compare the results. I am not sure whether I wrote my previous code wrongly, or whether 'ddeint' is not good enough for my minimization, or both. I will share the rest of my code as well. Here even tau is a parameter to be estimated, and I will share the link to the data as well. I am using the Confirmed column for (delta*(I+R)+D - df_i) and the Death column for (D - df_d) in the objective function (Model). I have several time variables, t, tt, ..., and I am worried about writing and calling them wrongly, especially in the DDE part and the integration for part_1 and part_2. Your code looks very nice and is of course more reliable, but I don't know how to implement it in my current code, or at least how I can correct my current code. I am sure your knowledge and experience will be very helpful.
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from scipy import integrate
from scipy.integrate import odeint
import lmfit
from lmfit.lineshapes import gaussian, lorentzian
from lmfit import minimize, Parameters, Parameter, report_fit, Minimizer
import datetime
import os
from math import sqrt
from ddeint import ddeint
def SEIRD(y, t, N, beta, kappa, gamma, mu, tau):
    S, E, I, R, D = y(t)
    Sd, Ed, Id, Rd, Dd = y(t-tau)
    dSdt = -beta(t) * S * I/N
    dEdt = beta(t) * S * I/N - kappa * E
    dIdt = kappa * E - (1-mu)*gamma*I - mu*gamma*Id
    dRdt = (1-mu) * gamma*I
    dDdt = mu*gamma*Id
    return np.array([dSdt, dEdt, dIdt, dRdt, dDdt])
# every parameter in Model will be estimated:
def Model(days, N, beta0, beta1, beta2, beta3, mu, E0, delta, tau, I0):

    def beta(t):
        # intended stages: t < 16 -> beta0, 16 <= t < 22 -> beta1,
        # 22 <= t < 52 -> beta2, t >= 52 -> beta3
        if t < 16:
            return beta0
        elif t < 22:
            return beta1
        elif t < 52:
            return beta2
        else:
            return beta3

    # history function for t <= t0
    g = lambda t: np.array([S0, E0, I0*np.exp((-np.log(0.1)/tau)*(t-t0)), R0, D0])
    tt = np.linspace(0, days - 1, days, dtype=int)
    yy = ddeint(SEIRD, g, tt, fargs=(N, beta, kappa, gamma, mu, tau))
    S, E, I, R, D = yy.T
    return S, E, I, R, D
loss = []
def Germany(pars, x, data=None):
    parvals = pars.valuesdict()
    mu = parvals['mu']
    E0 = parvals['E0']
    delta = parvals['delta']
    beta0 = parvals['beta0']
    beta1 = parvals['beta1']
    beta2 = parvals['beta2']
    beta3 = parvals['beta3']
    tau = parvals['tau']
    I0 = parvals['I0']
    # will return S, E, I, R, D
    ret = Model(days, N, beta0, beta1, beta2, beta3, mu, E0, delta, tau, I0)
    # Objective function:
    u_vector = np.array([beta0, beta1, beta2, beta3, delta, mu, E0, R0, I0, tau])
    # ret is (S, E, I, R, D), so confirmed ~ delta*(I+R)+D and deaths ~ D
    part_1 = integrate.trapz((delta*(ret[2] + ret[3]) + ret[4] - data[:, 0])**2, t)
    part_2 = integrate.trapz((ret[4] - data[:, 2])**2, t)
    part_3 = np.linalg.norm(u_vector, ord=2)**2
    norm_1 = integrate.trapz(data[:, 0]**2, t)
    norm_2 = integrate.trapz(data[:, 2]**2, t)
    # final objective function
    model2 = part_1/norm_1 + part_2/norm_2 + part_3*u_weight
    loss.append(model2)
    return model2
x_data = np.linspace(0, days - 1, days, dtype=int)
gamma = 1/10
kappa = 1/3
N = 83019213
delta = 0.3
R_0_start = 3
I0 = 114/delta
R0 = 16/delta
D0 = 0
E0 = (R_0_start/2) * (I0)
t = np.linspace(0, days - 1, days)
t0 = 0
S0 = N - E0 - I0 - R0 - D0
methods = ['least_squares','nelder']
R_0_list = [3, 4, 5]
u_weights = [0, 1e-9, 1e-8, 1e-7]
tau_list = [11.5]
test_number = 0
for method in methods[0:1]:
    for R_0_start in R_0_list[0:]:
        for u_weight in u_weights[0:]:
            for tau in tau_list[0:1]:
                loss = []
                # initiate the hyperparameters
                gamma = 1/10
                kappa = 1/3
                mu = 0.02
                N = 83019213
                delta = 0.3
                I0 = 114/delta
                R0 = 16/delta
                D0 = 0
                E0 = (R_0_start/2) * (I0)
                S0 = N - E0 - I0 - R0 - D0
                test_number += 1
                print('test_number', test_number, method, R_0_start, u_weight)
                fit_params = Parameters()
                fit_params.add('mu', value=0.02, min=0.01, max=0.05)
                fit_params.add('E0', value=280, min=130)
                fit_params.add('delta', value=0.3, min=0.05, max=0.5)
                fit_params.add('I0', value=176, min=114)
                fit_params.add('beta0', value=0.6, min=0.5)
                fit_params.add('beta1', value=0.4, max=0.5)
                fit_params.add('beta2', value=0.2, min=0.1)
                fit_params.add('beta3', value=0.1, min=0.05, max=0.1)
                fit_params.add('tau', value=7, min=6, max=11.5)
                # minimization from lmfit; here I want to use some other methods, for example
                # 'nelder' or 'least_squares'. Results with the 'nelder' method are not perfect
                # but at least meaningful; with 'least_squares' they are totally wrong.
                results = lmfit.minimize(Germany, fit_params, method=method, args=(x_data,),
                                         kws={'data': data}, nan_policy='omit')
                # use the estimated parameters to re-run the code and see the results
                mu = results.params['mu'].value
                E0 = results.params['E0'].value
                delta = results.params['delta'].value
                beta0 = results.params['beta0'].value
                beta1 = results.params['beta1'].value
                beta2 = results.params['beta2'].value
                beta3 = results.params['beta3'].value
                tau = results.params['tau'].value
                I0 = results.params['I0'].value
                R0 = 16/delta
                u_vector = np.array([beta0, beta1, beta2, beta3, delta, mu, E0, R0, tau, I0])
                part_3 = np.linalg.norm(u_vector, ord=2)**2
                Regularization = part_3*u_weight
                sol = Model(len(x_data), 83019213, beta0, beta1, beta2, beta3, mu, E0, delta, tau, I0)
Then I plot the results based on the estimated parameters, for confirmed cases (I) and death cases (D), the loss function, and the betas:
fig, ax = plt.subplots(figsize = fig_size)
date = pd.date_range(start="2020-03-01",end="2020-05-03")
x = np.arange(0, len(date), 1)
ax.semilogy(date, df_i, 'ro', mfc = 'None', label='Real Confirmed')
ax.plot(date,(sol[2] + sol[3])*delta + sol[4],'r', label='Confirmed')
ax.grid(True, which='both', ls="-", axis = 'y')
ax.grid(which = 'major', alpha = 0.3, color = 'black')
ax.grid(which = 'minor', alpha = 0.4)
plt.legend(loc='best')
plt.ylabel('Cumulated Cases')
plt.xlabel(x_label,fontsize='14')
locator = mdates.AutoDateLocator(minticks=6, maxticks=12, interval_multiples=True)
formatter = mdates.ConciseDateFormatter(locator)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
Here even tau is a parameter to be estimated, and I will share the link to the data as well.
Okay, in that case things become a bit more complicated, and you have to do something like this. Note the parameter max_τ, which should not be exceeded by the values your optimiser chooses for τ.
from jitcdde import jitcdde,y,t
from jitcxde_common import conditional
from chspy import CubicHermiteSpline
from symengine import symbols,Rational
import numpy as np
max_τ = 20
κ = Rational(1,3)
γ = Rational(1,10)
t0 = 0
N = 80000000
E0 = 280
I0 = 114
R0 = 16
D0 = 0
S0 = N - E0 - I0 - R0 - D0
days = 64
times = np.arange(t0,t0+days+1,1)
control_pars = [τ,β0,β1,β2,β3,μ,E0,δ] = symbols("tau beta0 beta1 beta2 beta3 mu E0 delta")
S,E,I,R,D = [y(i) for i in range(5)]
labels = ["S","E","I","R","D"]
Id = y(2,t-τ)
# Somewhat ugly, but gets hardcoded and can be easily adjusted using control parameters:
width = 0.5 # half a day
β = conditional(t,22,
        conditional(t,52,β3,β2,width),
        conditional(t,16,β1,β0,width),
        width
    )
SEIRD = {
    S: -β*S*I/N ,
    E: β*S*I/N - κ*E ,
    I: κ*E - (1-μ)*γ*I - μ*γ*Id,
    R: (1-μ)*γ*I ,
    D: μ*γ*Id,
}
# By defining the DDE outside of run_model we need to compile only once:
DDE = jitcdde( SEIRD, control_pars=control_pars, max_delay=max_τ, verbose=False )
def run_model(tau,*par_values):
    DDE.purge_past()
    DDE.past_from_function(
        [ S0, E0, I0*10**((t-t0)/tau), R0, D0 ],
        times_of_interest = [t0-tau,t0],
        max_anchors=10, tol=3
    )
    DDE.set_parameters(tau,*par_values)
    DDE.adjust_diff()
    return np.vstack([ DDE.integrate(time) for time in times ])
solutions = run_model( 11.5, 0.6, 0.5, 0.3, 0.1, 0.02, 280, 1 )
from matplotlib.pyplot import subplots
fig,axes = subplots()
for label,solution in zip(labels,solutions.T):
    axes.plot(times,solution,label=label)
axes.legend()
axes.set_yscale("log")
fig.savefig("results.pdf")
I honestly do not want to delve into that optimisation code here. The above gives a function to simulate the DDE, which should suffice for your optimising needs. If you have any questions or wishes regarding this (like understanding some parts of my code or making τ a parameter), I am happy to help, but anything going beyond this really is your job.
May I know how I can call I, R, D to use in my objective function, and not directly using solutions = run_model( 11.5, 0.6, 0.5, 0.3, 0.1, 0.02, 280, 1 )? In my code, after the DDE, I have 'return S, E, I, R, D', and then I use I, R, D in my objective function, which will be minimized, and the parameters are estimated based on the initial guess and the upper and lower bounds. So now I want to learn how I can pick I, R, D from your code and use them in other lines.
And why do we have 2 here in Id = y(2,t-τ)?
May I know how I can call I, R, D to use in my objective function, and not directly using solutions = run_model( 11.5, 0.6, 0.5, 0.3, 0.1, 0.02, 280, 1 )? In my code, after the DDE, I have 'return S, E, I, R, D', and then I use I, R, D in my objective function, which will be minimized, and the parameters are estimated based on the initial guess and the upper and lower bounds. So now I want to learn how I can pick I, R, D from your code and use them in other lines.
The output of run_model, i.e., solutions in the example code, is a NumPy array which has the time series of S, E, I, R, and D as columns (in that order). So, e.g., solutions[:,2] gives you the time series for I.
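For example:

S, E, I, R, D = solutions.T   # columns in the order S, E, I, R, D
I_series = solutions[:, 2]    # equivalent way to get the time series for I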
Why do we have 2 here in Id = y(2,t-τ)?
Because I is y(2) and thus the delayed I is y(2,t-τ). In general, any component has an optional second argument for the time. Mind that all this part is doing is renaming the dynamical variables for convenience.
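Spelled out (with the delay written as a plain number for illustration):

from jitcdde import y, t

τ = 11.5           # the delay, here as a number just for illustration
I  = y(2)          # I at the current time
Id = y(2, t - τ)   # I evaluated τ days in the past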
Thanks a lot. One more question: I checked the documentation of JiTCDDE, but I cannot find how 'conditional' works, and it is confusing for me now, because I was working with 0 <= t < 16 -> beta0, 16 <= t < 22 -> beta1, 22 <= t < 52 -> beta2, 52 <= t <= 64 -> beta3.
but I cannot find how 'conditional' works
It's from jitcxde_common (as the import statement tells you). It's documented here.
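To restate how the nested call above maps onto your stages (this just mirrors the positional usage from the code above; see the linked documentation for details):

from jitcdde import t
from jitcxde_common import conditional
from symengine import symbols

β0, β1, β2, β3 = symbols("beta0 beta1 beta2 beta3")
width = 0.5

# Used positionally as above, conditional(t, threshold, high, low, width) smoothly
# switches from `low` to `high` as t crosses `threshold`. Nesting one in each
# branch therefore reproduces your four stages:
#   t < 16        -> β0
#   16 <= t < 22  -> β1
#   22 <= t < 52  -> β2
#   t >= 52       -> β3
β = conditional(t, 22,
        conditional(t, 52, β3, β2, width),   # used when t is above 22
        conditional(t, 16, β1, β0, width),   # used when t is below 22
        width,
    )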
---> 77 part_1 = integrate.trapz((δ*(solutions[:,2] + solutions[:,3]) + solutions[:,4] - data[:,0])**2, t)
     78 part_2 = integrate.trapz((solutions[:,4] - data[:,2])**2, t)
~\Anaconda3\lib\site-packages\numpy\lib\function_base.py in trapz(y, x, dx, axis)
-> 4055 d = diff(x, axis=axis)
~\Anaconda3\lib\site-packages\numpy\lib\function_base.py in diff(a, n, axis, prepend, append)
-> 1235 raise ValueError("diff requires input that is at least one dimensional")
ValueError: diff requires input that is at least one dimensional

This happens in a line (and part) that is not related to my software, in a function that is not related to my software. Also, the error message is pretty clear. You have to solve this yourself.
Sorry for the previous naive question. I added this part and now it is working:
days = 64
t = np.arange(t0, t0+days, 1)
part_1 = integrate.trapz((δ*(solutions[:,2] + solutions[:,3]) + solutions[:,4] - data[:,0])**2, t)
part_2 = integrate.trapz((solutions[:,4] - data[:,2])**2, t)
The code is running, but I get this warning message during it: C:\Users\Nik\Anaconda3\lib\site-packages\jitcdde\_jitcdde.py:768: UserWarning: The target time is smaller than the current time. No integration step will happen. The returned state will be extrapolated from the interpolating Hermite polynomial for the last integration step. You may see this because you try to integrate backwards in time, in which case you did something wrong. You may see this just because your sampling step is small, in which case there is no need to worry.
It means exactly this: You should check that you are not integrating backwards (which you do not appear to do). If you don’t, don’t worry.
When quickly skimming your code I also noted that you seem to use two different δs. That's a bad idea and almost certainly not what you want. You can only use the values of control parameters within run_model.
Regarding the δs: as my I0, R0, and E0 depend on δ, for the initial points I have to write I0 = 114/δ or E0 = (R_0_start/2)*I0, and when I run the code I get an error asking me to define δ as well; otherwise, for me δ is a parameter to be estimated, and I don't want to use different δs.
Well, you can compute I0 and E0 within run_model using the value of δ passed as an argument. You only have to separate it from par_values and put it at an appropriate position in the arguments. Or maybe it's easier for you to use explicit arguments for the parameter values all the time.
But they are initial conditions too, so I have to use them there too. And another question: you wrote the history function differently from mine. Yours: I0*10**((t-t0)/tau); mine: I0*np.exp((-np.log(0.1)/tau)*(t-t0)). And when I change it, it is not working.
But they are initial conditions too, so I have to use them there too.
What is there? You only really use I0 and E0 inside run_model, and even if not, you can always move things there (you just want to avoid it as much as possible).
And another question: you wrote the history function differently from mine. Yours: I0*10**((t-t0)/tau); mine: I0*np.exp((-np.log(0.1)/tau)*(t-t0)).
Those should be equivalent, if I am not mistaken. Anyway, CHSPy's from_function takes a symbolic input (at least the way I use it), so you have to use sympy.exp and sympy.log instead of their NumPy analogues. (SymEngine works as well.)
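For example, the symbolic analogue of your expression could look like this (a sketch; plug in whatever values you actually use for I0, tau, and t0):

from jitcdde import t
from symengine import exp, log

t0, tau, I0 = 0, 11.5, 114   # hypothetical values for illustration

# symbolic analogue of I0*np.exp((-np.log(0.1)/tau)*(t-t0));
# mathematically this is the same function as I0*10**((t-t0)/tau)
past_I = I0 * exp((-log(0.1)/tau) * (t - t0))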
I don't want to avoid it; I don't know how I should put them inside run_model. I told you at the start, and several times, that I need help, so please help me with this issue; I am weak at it. I don't know how I have to put them as arguments in run_model. I tried several ways and all were wrong, so it is not running. You see that in all other parts I ask and then spend time implementing your comments, but when I cannot do it, it doesn't mean I am avoiding it.
It is very complicated for me to change anything inside run_model. I have S, E, I, R, D, and then I need initial points for each of them: I0 = 114/δ, R0 = 16/δ, D0 = 0, E0 = (R_0_start/2)*I0, S0 = N - E0 - I0 - R0 - D0, where I0 is also multiplied into the history function. In this minimization I have to estimate δ, β0, β1, β2, β3, μ, E0, I0, τ. Now I don't know how I can introduce them and at the same time put them in run_model; every change I make causes an error. If I put in I0, E0, and δ, I don't know how I have to implement R0 (which depends on δ) and S0 (which is a combination of all of them).
When I wrote "you just want to avoid it as much as possible", I wasn't trying to divine your actual desires. Instead, I wanted to describe what is a generally reasonable approach to designing run_model, namely, to make it as lightweight as possible to keep your optimisation routine efficient.

In the following I rewrote run_model to use δ, β₀, β₁, β₂, β₃, μ, and τ as arguments (not in that order). As E₀, R₀, and I₀ depend on δ, I did not make them arguments, but I hope it's really clear how to change this now. Note that I generally spell out Greek letters (e.g., delta) when referring to the values of the respective parameters and use the actual Greek letters (e.g., δ) as symbols to represent them when setting up the DDE for integration.
from jitcdde import jitcdde,y,t
from jitcxde_common import conditional
from chspy import CubicHermiteSpline
from symengine import symbols,Rational
import numpy as np
max_τ = 20
κ = Rational(1,3)
γ = Rational(1,10)
t0 = 0
N = 80000000
days = 64
times = np.arange(t0,t0+days+1,1)
control_pars = [τ,β0,β1,β2,β3,μ] = symbols("tau beta0 beta1 beta2 beta3 mu")
S,E,I,R,D = [y(i) for i in range(5)]
labels = ["S","E","I","R","D"]
Id = y(2,t-τ)
# Somewhat ugly, but gets hardcoded and can be easily adjusted using control parameters:
width = 0.5 # half a day
β = conditional(t,22,
        conditional(t,52,β3,β2,width),
        conditional(t,16,β1,β0,width),
        width
    )
SEIRD = {
    S: -β*S*I/N ,
    E: β*S*I/N - κ*E ,
    I: κ*E - (1-μ)*γ*I - μ*γ*Id,
    R: (1-μ)*γ*I ,
    D: μ*γ*Id,
}
# By defining the DDE outside of run_model we need to compile only once:
DDE = jitcdde( SEIRD, control_pars=control_pars, max_delay=max_τ, verbose=False )
def run_model(tau,beta0,beta1,beta2,beta3,mu,delta):
    I0 = 114/delta
    R0 = 16/delta
    D0 = 0
    E0 = R0/2*I0
    S0 = N - E0 - I0 - R0 - D0
    DDE.purge_past()
    DDE.past_from_function(
        [ S0, E0, I0*10**((t-t0)/tau), R0, D0 ],
        times_of_interest = [t0-tau,t0],
        max_anchors=10, tol=3
    )
    DDE.set_parameters(tau,beta0,beta1,beta2,beta3,mu)
    DDE.adjust_diff()
    return np.vstack([ DDE.integrate(time) for time in times ])
# Using keyword arguments here for illustration. You do not need to do this. If you order things right, you can probably just unpack `pars`.
solutions = run_model(
    tau = 11.5,
    beta0 = 0.6, beta1=0.5, beta2=0.3, beta3=0.1,
    mu = 0.02,
    delta = 1
)
from matplotlib.pyplot import subplots
fig,axes = subplots()
for label,solution in zip(labels,solutions.T):
    axes.plot(times,solution,label=label)
axes.legend()
axes.set_yscale("log")
fig.savefig("results.pdf")
Thanks a lot for your effort and help, it is very helpful and I am trying to implement it in my minimization :)
Hi,
I want to solve my delay differential equation system. I used ddeint, but the results are not good at all. I want to minimize my objective function based on the results of the Model. I have one delay part, in 'Id'.
I am a mathematician working on my thesis, and I have to solve this part, but my programming skills are not good enough. It would be great if you could help me solve it. Thanks in advance.
Best regards, Nik