Closed — aciniduva closed this issue 3 years ago
Could you show a code example and sample data to make the problem clearer?
Hi! I am trying to use your package Hawkes to build graphs of the counting process and the conditional intensity, but I have run into a problem: I do not understand how to correctly pass the times of events (np.array). I am writing the following code and get the wrong graph for N(0,t). What am I doing wrong? P.S. Sorry, I do not know Python very well and this process is new to me. I appreciate your help!
import Hawkes as hk
para = {"mu":0.2, "alpha":0.8, "beta":0.5}
itv = [0,2000]
Times = np.array(df['days'])
model = hk.simulator().set_kernel('exp').set_baseline('const').set_parameter(para)
model_simulation.plot_N()
I also encountered a problem while trying to estimate parameters. The code is below. I receive the error: 'simulator' object has no attribute 'fit'. Could you please explain how to fix it?
import Hawkes as hk
itv = [0,1000] # the observation interval
T = np.array(df['days'])
model.fit(T,itv) # T is the event times (numpy.ndarray)
print("parameter:",model.parameter) # the estimated parameter values
print("branching ratio:",model.br) # the branching ratio
print("log-likelihood:",model.L) # the log-likelihood of the estimated parameter values
print("AIC:",model.AIC) # the AIC of the estimated parameter values
For your purpose, you need to use hk.estimator() rather than hk.simulator().
The following is a code example to generate figures.
import numpy as np
import Hawkes as hk
para = {"mu":0.2, "alpha":0.8, "beta":0.5}
itv = [0,10]
T = np.array([1,2,3,4,5,6,7,8,9])
model = hk.estimator().set_kernel('exp').set_baseline('const').set_parameter(para).set_data({'T':T}, itv)
model.plot_N() # the figure of N(0,T)
model.plot_l() # the figure of time vs conditional intensity function
The following is a code example for inference.
import Hawkes as hk
import numpy as np
itv = [0,1000] # the observation interval
T = np.arange(1,1000, dtype=float) # sample data (np.float is deprecated in recent NumPy; use the builtin float)
model = hk.estimator().set_kernel('exp').set_baseline('const')
model.fit(T,itv) # T is the event times (numpy.ndarray)
print("parameter:",model.parameter) # the estimated parameter values
print("branching ratio:",model.br) # the branching ratio
print("log-likelihood:",model.L) # the log-likelihood of the estimated parameter values
print("AIC:",model.AIC) # the AIC of the estimated parameter values
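For context: hk.simulator() is for generating synthetic event times from given parameters, while hk.estimator() fits parameters to observed data. A minimal sketch of what such a simulator does, using Ogata's thinning algorithm in plain NumPy (this is an illustration of the standard algorithm, not the package's actual implementation; the kernel is parameterized as alpha*beta*exp(-beta*t) so that alpha is the branching ratio, matching the parameters above):

```python
import numpy as np

def simulate_hawkes_exp(mu, alpha, beta, itv, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha*beta*exp(-beta*(t - t_i))
    via Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    t, t_end = itv
    events = []
    lam_excess = 0.0  # excitation part of the intensity just after time t
    while True:
        # between events the intensity only decays, so this is an upper bound
        lam_bar = mu + lam_excess
        t = t + rng.exponential(1.0 / lam_bar)  # candidate event time
        if t >= t_end:
            break
        # decay the excitation over the waiting time to get the true intensity
        lam_excess *= np.exp(-beta * (t - (events[-1] if events else itv[0])) if False else 1.0)
        if rng.uniform() <= (mu + lam_excess) / lam_bar:  # accept w.p. lambda(t)/lam_bar
            events.append(t)
            lam_excess += alpha * beta  # each event adds a jump of size alpha*beta
    return np.array(events)

T = simulate_hawkes_exp(mu=0.2, alpha=0.8, beta=0.5, itv=[0, 2000])
```

With mu=0.2 and alpha=0.8 the expected number of events over [0, 2000] is mu*T/(1-alpha) = 2000.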
Thanks a lot for your help! It works right now!
I have one more question regarding the use of the package. Is it possible to use it for modelling a bivariate Hawkes process? I need to model not only self-excitation of the process but also cross-excitation, as in this article on page 21: http://arno.uvt.nl/show.cgi?fid=151308 Thanks in advance!
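Whether the package itself supports the multivariate case would need to be confirmed by the author. Independent of the package, the bivariate conditional intensities with exponential self- and cross-excitation kernels have the form lambda_i(t) = mu_i + sum_j sum_{t_k^(j) < t} alpha[i,j]*beta[i,j]*exp(-beta[i,j]*(t - t_k^(j))), which can be evaluated directly in NumPy. A sketch (the function name and parameterization here are my own, not the package's API):

```python
import numpy as np

def bivariate_intensity(t, mu, alpha, beta, T1, T2):
    """Conditional intensities (lam1, lam2) of a bivariate Hawkes process.
    mu: shape (2,) baseline rates; alpha, beta: shape (2,2), where entry
    (i, j) governs the excitation of component i by events of component j;
    T1, T2: event times of components 1 and 2."""
    mu = np.asarray(mu, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    lam = mu.copy()
    for j, Tj in enumerate([np.asarray(T1, dtype=float), np.asarray(T2, dtype=float)]):
        past = Tj[Tj < t]  # only events strictly before t contribute
        for i in range(2):
            lam[i] += np.sum(alpha[i, j] * beta[i, j] * np.exp(-beta[i, j] * (t - past)))
    return lam
```

The off-diagonal entries alpha[0,1] and alpha[1,0] carry the cross-excitement the article describes; setting them to zero reduces the model to two independent univariate processes.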
Hi, this is more a question than an issue. I'm trying to minimize the log-likelihood function for a Hawkes process whose kernel is a sum of P exponentials. For work reasons I wrote the loss function myself, and for each point (\mu, \alpha1, \alpha2, \beta1, \beta2) both the function value and its gradient coincide with yours. However, using standard strategies from scipy.optimize.minimize, most of the time I find a point different from the one your estimator returns, especially when the betas differ by several orders of magnitude. So I wanted to ask whether you use multiple initial guesses as starting points, or something else. Thank you for your attention.
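One common remedy for this kind of sensitivity to initialization is a multi-start fit: run the optimizer from several starting points (spreading the initial betas over several orders of magnitude) and keep the lowest objective. A sketch for the single-exponential kernel, which extends to the P-exponential case in the same way; the likelihood recursion and the multi-start loop below are my own illustration under the alpha*beta*exp(-beta*t) parameterization, not the package's internal code:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, T, t_end):
    """Negative log-likelihood of a univariate exp-kernel Hawkes process,
    using the O(n) recursion A_i = exp(-beta*(t_i - t_{i-1})) * (1 + A_{i-1})."""
    mu, alpha, beta = params
    A = np.zeros(len(T))
    for i in range(1, len(T)):
        A[i] = np.exp(-beta * (T[i] - T[i - 1])) * (1.0 + A[i - 1])
    log_lam = np.log(mu + alpha * beta * A)           # log-intensity at each event
    compensator = mu * t_end + alpha * np.sum(1.0 - np.exp(-beta * (t_end - T)))
    return -(np.sum(log_lam) - compensator)

def fit_multistart(T, t_end, n_starts=10, seed=0):
    """Minimize neg_loglik from several random starting points and
    return the scipy result with the lowest objective value."""
    rng = np.random.default_rng(seed)
    bounds = [(1e-6, None), (1e-6, 1 - 1e-6), (1e-6, None)]  # mu>0, 0<alpha<1, beta>0
    best = None
    for _ in range(n_starts):
        x0 = [rng.uniform(0.01, 1.0),
              rng.uniform(0.1, 0.9),
              10.0 ** rng.uniform(-2, 2)]  # betas spread over 4 orders of magnitude
        res = minimize(neg_loglik, x0, args=(T, t_end),
                       method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best
```

For sums of exponentials the likelihood surface is often multi-modal (e.g. two beta components can swap roles, or one can collapse onto the other), so different starts landing in different local optima is expected behavior rather than a bug in either implementation.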