rlabbe / Kalman-and-Bayesian-Filters-in-Python

Kalman Filter book using Jupyter Notebook. Focuses on building intuition and experience, not formal proofs. Includes Kalman filters, extended Kalman filters, unscented Kalman filters, particle filters, and more. All exercises include solutions.
Other
16.45k stars 4.16k forks

UKF_predict_arguments #258

Closed marcomassano closed 4 years ago

marcomassano commented 5 years ago

Hi, I'm trying to learn how to use the UKF with the filterpy library. I have some problems with the arguments of the predict function when the function has many inputs. I've also tried to run your experiment example 'ukfloc2', but when I run the code it returns: TypeError: fx() got an unexpected keyword argument 'fx_args'

Your fx function has 3 arguments (x, dt, u). I've seen that when the fx function has only 2 arguments I just call 'predict()' to run the filter, but when it has more than 2 arguments the remaining arguments have to be passed during the predict step. Is that right? Is it just a number-of-arguments issue?

Anyway, I'm not able to run: ukf.predict(fx_args=u)

Can you please help me in understanding why?
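For context, recent filterpy releases forward any extra keyword arguments of predict straight through to fx, so the keyword names must match fx's parameter names; there is no parameter literally called fx_args. A minimal self-contained sketch of that forwarding pattern (the fx model and the input u below are made up for illustration, and predict here is a simplified stand-in, not filterpy's actual method):

```python
def fx(x, dt, u):
    # hypothetical process model with a control input u
    return [xi + dt * u for xi in x]

def predict(x, dt, fx, **fx_args):
    # simplified stand-in for how predict(**fx_args) hands keywords to fx
    return fx(x, dt, **fx_args)

x = [1.0, 2.0]
print(predict(x, 0.5, fx, u=2.0))  # -> [2.0, 3.0]
```

With a signature like fx(x, dt, u) the call would be ukf.predict(u=u); calling ukf.predict(fx_args=u) instead makes Python hand fx a keyword literally named fx_args, which is exactly the TypeError above.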

The problem I'm trying to solve with the Kalman Filter is an RC circuit, and the main part of the script is:

def state(x, Sun, Text):
    print(x)  # debug print (Python 3 syntax)
    xx = np.zeros(6)
    xx[0] = -x[0]*x[2] + x[1]*x[2] + x[5]*Sun
    xx[1] = -x[1]*x[4] - x[1]*x[3] + x[0]*x[4] + Text*x[3]
    xx[2] = 0
    xx[3] = 0
    xx[4] = 0
    xx[5] = 0
    return xx

def hx(x):
    return x[:1]

def fx(x,dt,Sun,Text):
    xx = x + dt * state(x,Sun,Text)
    return xx

n = 6
sigmas = MerweScaledSigmaPoints(n, alpha=.1, beta=2., kappa=-3.)
plant = UnscentedKalmanFilter(dim_x=n, dim_z=1, fx=fx, hx=hx, dt=1., points=sigmas)
plant.x = np.array([Tin0, Tw0, 1/(Ra*Ca), 1/(Rw*Cw), 1/(Ra*Cw), Aw/Ca]).T
plant.P = np.diag([0.01, 0.01, 1, 1, 1, 1])
plant.R = np.array(.5)
plant.Q = np.diag([100, 100, 10, 10, 10, 10])

xs, xP, res = [], [], []
for i, z in enumerate(Tin):
    plant.predict(Sun=Sun[i], Text=Text[i])
    # plant.update(z)
    xs.append(plant.x.copy())
    xP.append(plant.P.copy())
    res.append(plant.y.copy())

Is that the right way to pass the arguments of the fx function? If I call it like that the script doesn't crash, but my state variable x diverges. I'm running only the predict step on purpose, because I want to check whether the x profile matches the analytical result, but it does not. I'm trying to understand whether the divergence comes from the structure of the UKF (P matrix, sigma points, alpha/beta/kappa parameters...) or from the way I call my fx function.
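One way to separate the two suspects is to run fx on its own, outside the UKF, as a plain Euler integration and compare that against the analytical profile: if even this diverges, the problem is in the model, not in the filter setup. A sketch with placeholder constants (the initial state and the Sun/Text inputs below are invented; substitute the real values):

```python
import numpy as np

def state(x, Sun, Text):
    # same RC-circuit derivative as in the question, minus the debug print
    xx = np.zeros(6)
    xx[0] = -x[0]*x[2] + x[1]*x[2] + x[5]*Sun
    xx[1] = -x[1]*x[4] - x[1]*x[3] + x[0]*x[4] + Text*x[3]
    return xx

def fx(x, dt, Sun, Text):
    return x + dt * state(x, Sun, Text)

# placeholder initial state and inputs -- not the real data
x = np.array([20.0, 18.0, 0.1, 0.05, 0.02, 0.3])
Sun = np.full(50, 100.0)
Text = np.full(50, 5.0)

traj = [x]
for k in range(50):
    x = fx(x, 1.0, Sun[k], Text[k])
    traj.append(x)
traj = np.array(traj)
print(np.isfinite(traj).all())  # -> True: the deterministic propagation stays finite
```

If this open-loop trajectory already disagrees with the analytical solution, no choice of P, Q, or sigma point parameters will fix it.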

I would be very grateful if someone could help me.

rlabbe commented 5 years ago

I'm on vacation with minimal internet access, so it will be at least a week before I can address this. For now, note that fx takes only two arguments. Try using a global for your extra variables. I think there's a way to call it with more variables, but I just don't remember right now.
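One way to follow that suggestion without an actual global is to bind the extra inputs with functools.partial, so the filter only ever sees a two-argument fx(x, dt). The transition model and the numbers below are made up for illustration:

```python
from functools import partial

def fx_full(x, dt, Sun, Text):
    # toy transition driven by two exogenous inputs (illustrative only)
    return [xi + dt * (Sun + Text) for xi in x]

# bind this step's inputs; the result is a plain fx(x, dt)
fx = partial(fx_full, Sun=10.0, Text=2.0)
print(fx([1.0], 0.5))  # -> [7.0]
```

Inside a filtering loop you would rebuild the partial (or update a module-level global) with Sun[i] and Text[i] before each predict call.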


rlabbe commented 5 years ago

I looked at this, it is impossible to debug without knowing your constants and data. I set them to arbitrary values, and changed the print statement in state() to print sun and text, and confirmed that the values in Sun and Text are being passed in correctly; you can do the same. I'd add additional print statements to debug the entire function; in particular, look at the spread of the sigma points (x) to see if the sample region is too big. For example, for my entirely arbitrary initial values and constants I got this for the first predict call for x:

    [0.1    0.2    3.3333 0.5    5.     0.3333]
    [0.1173 0.2    3.3333 0.5    5.     0.3333]
    [0.1    0.2173 3.3333 0.5    5.     0.3333]
    [0.1    0.2    3.5065 0.5    5.     0.3333]
    [0.1    0.2    3.3333 0.6732 5.     0.3333]
    [0.1    0.2    3.3333 0.5    5.1732 0.3333]
    [0.1    0.2    3.3333 0.5    5.     0.5065]
    [0.0827 0.2    3.3333 0.5    5.     0.3333]
    [0.1    0.1827 3.3333 0.5    5.     0.3333]
    [0.1    0.2    3.1601 0.5    5.     0.3333]
    [0.1    0.2    3.3333 0.3268 5.     0.3333]
    [0.1    0.2    3.3333 0.5    4.8268 0.3333]
    [0.1    0.2    3.3333 0.5    5.     0.1601]

I have no idea if that sampling is reasonable or not, but the difference between successive values of x (0.1 -> 0.1173, etc.) is entirely controlled by alpha, beta, and kappa.
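For intuition: the Merwe scaled construction sets lambda = alpha^2 * (n + kappa) - n and offsets each sigma point by a row of sqrt((n + lambda) * P), so with alpha=0.1, kappa=-3, n=6 the offset on the first state is sqrt(0.03 * 0.01) ≈ 0.0173, which is exactly the 0.1 -> 0.1173 step above. A self-contained numpy sketch of the standard construction (my own implementation for illustration, not filterpy's code):

```python
import numpy as np

def merwe_sigma_points(x, P, alpha, beta, kappa):
    # standard Van der Merwe scaled sigma point placement (2n + 1 points)
    n = len(x)
    lambda_ = alpha**2 * (n + kappa) - n
    U = np.linalg.cholesky((n + lambda_) * P).T  # rows are the offsets
    sigmas = [x]
    for row in U:
        sigmas.append(x + row)
    for row in U:
        sigmas.append(x - row)
    return np.array(sigmas)

n = 6
x = np.array([0.1, 0.2, 3.3333, 0.5, 5.0, 0.3333])
P = np.diag([0.01, 0.01, 1, 1, 1, 1])

# alpha=0.1 gives n + lambda = 0.03, so the first-state offset is
# sqrt(0.03 * 0.01) ~= 0.0173; alpha=1.0 widens it tenfold
s_small = merwe_sigma_points(x, P, alpha=0.1, beta=2.0, kappa=-3.0)
s_large = merwe_sigma_points(x, P, alpha=1.0, beta=2.0, kappa=-3.0)
print(s_small[1, 0] - x[0], s_large[1, 0] - x[0])
```

Shrinking alpha pulls the samples toward the mean; if the filter diverges with a tight spread like this, the fault is more likely in fx or the initial P than in the sigma point tuning.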