psmoveservice / PSMoveService

A background service that communicates with the PSMove and stores pose and button data.
Apache License 2.0

Sensor Fusion between IMU and optical tracker #13

Open cboulay opened 8 years ago

cboulay commented 8 years ago

I'm just dropping a couple links to papers here for now.

1

2

HipsterSloth commented 8 years ago

We should port over the positional Kalman filter from psmoveapi and use the acceleration (from the IMU) to help drive the prediction of the controller position. The current Kalman filter makes no attempt to predict, and while it does smooth out the tracking, it makes the controller feel laggy.

cboulay commented 8 years ago

@HipsterSloth wrote this elsewhere:

Right now we are using a simple LowPass filter on the position data we get from the tracker. On psmoveapi I made a simple Kalman filter for smoothing, but it didn't leverage the accelerometer so it felt laggy:

https://github.com/HipsterSloth/psmoveapi/blob/master/src/tracker/psmove_kalman_filter.cpp

We already have a framework in place for filtering here:

https://github.com/cboulay/PSMoveService/blob/master/src/psmoveservice/Filter/PositionFilter.h

We need to create a new Kalman filter that plugs into our filter framework and leverages the other sensor data to make a better prediction of position.

Other helpful links:

- Pictorial of Kalman filters: http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
- Tutorial of extended Kalman filters (with TinyEKF implementation): http://home.wlu.edu/~levys/kalman_tutorial/

cboulay commented 8 years ago

Shower thoughts:

  1. Kalman filter used to estimate state, where state includes position, velocity, orientation.
    • Do we need linear acceleration?
    • Do we need angular velocity/acceleration?
  2. Filter only uses IMU sensor data (i.e. "dead reckoning")
  3. Maintain a buffer of state "snapshots", where each snapshot contains everything needed to update the filter.
  4. Whenever a "good" position estimate comes in from the optical tracker, go back in time to the snapshot whose IMU sensor time coincided with the camera frame time, and overwrite its position with the new position (see the sketch after this list).
    • We need a system to quickly evaluate the quality of position estimates. I'll create a new issue for this even though we shouldn't implement it yet.
    • We need to know the sensor-to-orientation delay for the IMU and the sensor-to-position delay for the camera.
  5. Iterate forward through the remaining snapshots, updating each one using the preserved sensor measurements.
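
A rough sketch of what items 3-5 could look like (the data layout and names here are invented for illustration, not taken from PSMoveService):

```python
# Hypothetical snapshot buffer: keep each sensor packet and the filter state it
# produced; when a delayed optical position arrives, rewind to the matching
# snapshot, overwrite its position, and replay the preserved packets forward.
from collections import deque

class SnapshotBuffer:
    def __init__(self, filter_step, maxlen=256):
        self.filter_step = filter_step          # fn(state, sensor_packet) -> new state
        self.snapshots = deque(maxlen=maxlen)   # entries: (timestamp, sensor_packet, state)

    def on_imu_packet(self, timestamp, sensor_packet, state):
        state = self.filter_step(state, sensor_packet)
        self.snapshots.append((timestamp, sensor_packet, state))
        return state

    def on_optical_position(self, frame_time, position):
        if not self.snapshots:
            return None
        # Find the snapshot whose IMU time coincided with the camera frame time.
        idx = min(range(len(self.snapshots)),
                  key=lambda i: abs(self.snapshots[i][0] - frame_time))
        t, packet, state = self.snapshots[idx]
        state = dict(state, position=position)   # overwrite position in that snapshot
        self.snapshots[idx] = (t, packet, state)
        # Replay the remaining snapshots using their preserved sensor measurements.
        for i in range(idx + 1, len(self.snapshots)):
            ti, pi, _ = self.snapshots[i]
            state = self.filter_step(state, pi)
            self.snapshots[i] = (ti, pi, state)
        return state
```
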
HipsterSloth commented 8 years ago

Kalman filter used to estimate state, where state includes position, velocity, orientation. Do we need linear acceleration? Do we need angular velocity/acceleration?

I think the linear acceleration can be fed into the control model matrix/control vector (i.e. the B*u part of the state update equation: x_(k|k-1) = F*x_(k-1|k-1) + B*u) since we have the sensor inputs for it via the accelerometer. We do need to subtract out gravity from the accelerometer reading, though. Since we have the orientation from the orientation filter, it should be possible to put the gravity vector into the sensor's frame of reference and subtract out its effect.

If we want to roll the orientation filtering into the Kalman filter as well, we could feed the gyroscope sensor data (angular velocity) into the control vector, along with the estimated orientation (derived from the magnetometer-accelerometer alignment with reference vectors; see the usage of eigen_alignment_quaternion_between_vector_frames). The control model matrix 'B' would map the gyroscope measurement into an orientation delta.
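
A minimal sketch of those two ideas, assuming scipy for the quaternion math (names and frame conventions are illustrative, not from the codebase):

```python
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY_WORLD = np.array([0.0, -9.8, 0.0])  # assumed world gravity; sign convention varies by driver

def linear_acceleration(orientation, accelerometer_sensor):
    """Subtract gravity's effect from a raw accelerometer reading.

    orientation: Rotation taking sensor-frame vectors into the world frame
    accelerometer_sensor: raw accelerometer reading in the sensor frame
    """
    gravity_sensor = orientation.inv().apply(GRAVITY_WORLD)   # gravity in the sensor frame
    return np.asarray(accelerometer_sensor) - gravity_sensor

def apply_gyro_control(orientation, gyro_sensor, dt):
    """The role B*u would play for orientation: first-order integration of the
    body-frame angular velocity into an orientation delta."""
    delta = Rotation.from_rotvec(np.asarray(gyro_sensor) * dt)
    return orientation * delta
```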

cboulay commented 8 years ago

Next morning thoughts:

Using my method above, we'd probably still get optical tracking jitter, especially in the Z-direction. E.g., if I'm moving the controller in the +X direction and the last optical tracker update had me at X=20, Z=40, the predicted position would have me at X=24, Z=40. If the next optical tracker update puts me at X=23, Z=32, the next predicted position would have me at X=27, Z=32... with an 8-cm jitter in the Z-direction.

So here are some other ideas on how to handle the multi-rate:

cboulay commented 8 years ago

Some more resources:

- OpenCV UKF docs and example usage
- Sensor fusion UKF using Eigen with Python wrapper

HipsterSloth commented 8 years ago

That cukf example looks pretty close to what we want. It has some extra state in there (looks like this is used in a flight system controller or something) but it has the position, velocity, orientation and MARG state all in one filter: https://github.com/sfwa/ukf/blob/master/examples/sfwa_ukf/cukf.cpp

cboulay commented 8 years ago

See section 3.4

They don't use a magnetometer, though, so I have to come up with a new measurement model (i.e., what are the expected magnetometer values when you 'know' the state).

Also, they only use a first order process model for orientation; they don't keep angular acceleration in their state vector and assume angular velocity won't change much frame-to-frame. @HipsterSloth , does the openvr_driver want angular acceleration? If so then I'll make this second order maybe using the simple second order process model for angular velocity and acceleration from the sfwa/ukf repo above.

If we don't need it, there's an advantage to reducing the length of the state vector not only because it simplifies the process model but other parts of the predict step get simpler too.

HipsterSloth commented 8 years ago

@cboulay SteamVR does take an angular velocity and an angular acceleration, BUT I'm computing the angular acceleration just using a simple derivative ((w1 - w0)/dt) calculation on the angular velocity angle-axis vector:

https://github.com/cboulay/PSMoveService/blob/master/src/psmoveservice/Filter/OrientationFilter.cpp#L557

Point being I don't think you have to make a second order model. I think just keeping the angular velocity and then deriving the angular acceleration after the filter runs will be fine (since it seems to work great now on a much simpler filter).

As for the magnetometer measurement it's just a unit vector (the B-field direction).

I think it might be easiest to consider the magnetometer in conjunction with the accelerometer to get an orientation using eigen_alignment_quaternion_between_vector_frames. That function takes the current magnetometer and accelerometer readings and computes the quaternion that best aligns them with the reference magnetometer and accelerometer readings captured during magnetometer calibration (i.e., the controller orientation). We could treat the output of this function as a virtual "orientation sensor". I could even compute the base amount of variance we get for this virtual sensor during magnetometer calibration.

The angular velocity in the model can come straight from the gyroscope.

If we wanted to get a bit fancier we could scale the variance of the "virtual orientation" sensor with the acceleration magnitude (since the accelerometer is a blend of gravity plus the motion of the controller).
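
A tiny sketch of that variance scaling idea (the function name and constants are hypothetical, just to make it concrete):

```python
import numpy as np

def virtual_orientation_variance(accel_sensor, base_variance, gain=4.0, g=9.8):
    """Inflate the "virtual orientation sensor" variance when the accelerometer
    magnitude deviates from 1g, i.e. when the controller is accelerating."""
    motion = abs(np.linalg.norm(accel_sensor) - g) / g   # ~0 when at rest
    return base_variance * (1.0 + gain * motion)
```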

cboulay commented 8 years ago

With a UKF, we have to describe how a state will lead to a measurement. This is the observation model. The relationship between sensors and states is already captured there. If we use our sensors to generate a virtual sensor that is also a state, then I'm worried we'll create some recursive problems. But maybe this isn't really a problem; it's not terribly different from what we're doing. The other argument against it is that I haven't seen any other papers that do this, and I eventually hope to publish this. To do something 'non-standard', I'd have to prove that it is superior.

BTW, here are the process and observation models for sfwa/ukf:

```
# Process model - they use derivatives here, multiplied by deltaT later, presumably add noise later...
# delta_position not useful in our case
delta_linear_velocity = orientation.conjugate * linear_acceleration
delta_linear_acceleration = 0  # no change
delta_orientation = [0 0.5*angular_velocity].conjugate * orientation
delta_angular_velocity = angular_acceleration
delta_angular_acceleration = 0
delta_gyro_bias = 0

# Observation model
optical_position = position
accelerometer = linear_acceleration + orientation * gravity
gyroscope = angular_velocity + gyroscope_bias
magnetometer = orientation * local_magnetic_field
```
The magnetometer calibration gives us the local magnetic field (correct?). Do we have gyroscope bias saved anywhere?

For completeness, here are the observation model and process model for the thesis I linked above:

```
# Process model - dt applied in-line
position = position + dt*linear_velocity + 0.5*dt*dt*linear_acceleration + positional_noise
linear_velocity = linear_velocity + dt*linear_acceleration + velocity_noise
linear_acceleration = linear_acceleration + acceleration_noise
rot_angle = dt*mag(angular_velocity)
rot_axis = angular_velocity / mag(angular_velocity)
orientation = orientation * orientation_noise * [cos(rot_angle/2), rot_axis*sin(rot_angle/2)]
angular_velocity = angular_velocity + angular_velocity_noise  # 1st order, no acceleration

# Measurement model
optical_position = position + optical_velocity  # We don't measure optical velocity
gyroscope = (orientation + v_gyro).transform()  # v_gyro undefined, maybe bias?
accelerometer = (linear_acceleration + gravity + v_accel).transform()  # v_accel?
# transform() is an operation that transforms from the static reference frame to the dynamic controller reference frame
# they don't use magnetometers
```
HipsterSloth commented 8 years ago

I don't think there will be any recursive problems, because the current state of the "virtual orientation" sensor isn't derived from any previous filter state. It's effectively a stateless transform of the sensor data from that frame. That said, I can't defend that approach as fundamentally better than incorporating the magnetometer reading directly into the filter. And honestly, whichever approach you think is best is fine. I only mentioned the other way since I thought it might make the construction of the filter easier.

EDIT: And actually, another argument against my suggestion is computational cost: running eigen_alignment_quaternion_between_vector_frames involves doing an iterative gradient descent. EDIT2: I re-read your comment again and realized that there may be a recursive problem with using the virtual sensor in the state together with the positional acceleration readings, since those are a function of orientation.

The magnetometer calibration gives us the local magnetic field (correct?). Do we have gyroscrope bias saved anywhere?

Yeah, the calibration gives the direction the magnetic field is pointing when the controller is in the "identity" pose. It's currently a unit vector, but you could transform that to a quaternion by computing the rotation needed to rotate the vector <1, 0, 0> to the calibration direction. It looks like the sfwa observation model treats the local magnetic field as a quaternion.
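
For reference, a sketch of that vector-to-quaternion conversion (the shortest-arc rotation from <1, 0, 0> to the calibrated direction; this is an assumed implementation, not the MathAlignment.cpp code):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def quat_from_x_axis_to(direction):
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    x = np.array([1.0, 0.0, 0.0])
    axis = np.cross(x, d)
    s = np.linalg.norm(axis)
    if s < 1e-8:  # parallel or anti-parallel to <1, 0, 0>
        return Rotation.identity() if d[0] > 0 else Rotation.from_rotvec([0.0, np.pi, 0.0])
    angle = np.arctan2(s, np.dot(x, d))
    return Rotation.from_rotvec(axis / s * angle)
```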

Do we have gyroscrope bias saved anywhere?

We don't presently. I did write a gyro error measurement tool a while back for psmoveapi:

https://github.com/HipsterSloth/psmoveapi/blob/psmove_unity5/examples/c/test_gyroscope_error.c

Either you could port that over to Python to just get the gyro bias and drift values as constants, or I could add it as one of the calibration steps that we can run.

HipsterSloth commented 8 years ago

Actually, I think I'm just going to port the gyro calibration over this weekend. I need to write a more complicated gyro calibration tool for the DualShock 4 anyway, so this will be a good precursor for that.

HipsterSloth commented 8 years ago

@cboulay FYI I'm currently debugging the gyro calibration tool now. I should have that wrapped up by EOD tomorrow (ds4 branch). The gyro calibration tool computes the average noise (rad/s), the variance of the noise (rad/s^2), and the drift (rad/s). The noise measurement is computed by having the user place the controller on a level surface and then averaging the absolute value of the gyro readings over a short period. Since the gyro readings should be zero, any measured value is noise. The drift I compute by summing the gyro readings over 60 seconds and then dividing by the 60 seconds to get the drift rate. These get saved to the controller config. I can get you values for the psmove if you need those for your Kalman filtering work.
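
My reading of that procedure, as a sketch (variable names are illustrative; samples are raw gyro readings in rad/s taken while the controller sits still):

```python
import numpy as np

def gyro_statistics(samples, timestamps):
    samples = np.asarray(samples)        # shape (N, 3), rad/s
    timestamps = np.asarray(timestamps)  # shape (N,), seconds
    dt = np.diff(timestamps)
    total_time = timestamps[-1] - timestamps[0]

    # At rest the true angular velocity is zero, so any reading is noise.
    mean_noise = np.mean(np.abs(samples), axis=0)   # rad/s
    variance = np.var(np.abs(samples), axis=0)      # (rad/s)^2

    # Drift: integrate the readings over the window, divide by the elapsed time.
    total_drift = np.sum(samples[:-1] * dt[:, None], axis=0)  # rad
    drift_rate = total_drift / total_time                     # rad/s
    return mean_noise, variance, drift_rate
```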

cboulay commented 8 years ago

Yes, getting gyro error will help a lot. I've been busy and haven't looked at this in a couple weeks. I hope to get back to it in the next couple of days.

cboulay commented 8 years ago

Apologies. I just rebased client_capi on msvc2015_64bit then did a force push.

HipsterSloth commented 8 years ago

@cboulay Oh, no worries. I don't have any un-pushed work in client_capi. Actually, on that note, I was hoping to wrap up the ds4 work this weekend, pull-request it to master, and then pull that into client_capi, since there are a bunch of new additions to the client API for the ds4 controller. Nothing changed with the existing client API (just new stuff). But I wanted to check with you first before I slammed a bunch of new code into the branch you were doing the Kalman filter work in. Actually, now that I type this, I think I should probably just have you look at my PR to master once it's ready, and then you can let me know if it's something you want me to avoid bringing down into the CAPI branch.

HipsterSloth commented 8 years ago

@cboulay Sorry, I got caught up debugging a bunch of stuff with the ds4 tracking work and forgot about this. I have some PSMove gyro statistics for you. I'm measuring this two ways. The first was the gyro error tool I wrote a while ago for psmoveapi. I realized I had a mistake in there with how I measured error: I need to compute statistics on abs(gyro_sample), i.e. the gyro error, not the raw gyro sample. The mean over a short period of gyro samples when the controller is at rest should be 0.

Here is the output from the test_gyroscope_error.c

Total samples: 1713
Total sampling time: 30.011591s
Total angular drift (rad): : <-2.227033, 0.123498, 0.694441>
Angular drift rate (rad/s): : <0.074206, 0.004115, 0.023139>
Mean time delta: 0.017520s
Mean angular velocity (rad/s): <0.042783, 0.003166, 0.014328>
Std. dev angular velocity (rad/s): <0.024825, 0.003242, 0.013953>
Angular velocity variance (rad/s/s): <0.000616, 0.000011, 0.000195>
[Madgwick Parameters]
[Beta] Max angular velocity variance (rad/s/s): 0.000616
[Zeta] Max drift rate (rad/s): 0.074206

I have a gyro calibration tool in DS4 branch of PSMoveService as well. It yields the following variance and drift that get saved into the controller config (units rad/s^2 and rad/s):

"Variance": "0.00021907287",
"Drift": "0.0743218213"

Comparing the two sets of results, the drift rate is in close agreement, but the variance is around 3x higher in the psmoveapi tool. I've run both tools several times and the results they measure are pretty stable. I'm not sure what causes the difference in the variance, but for now I would probably trust the value from the psmoveapi version of the tool.

EDIT: typos

HipsterSloth commented 8 years ago

@cboulay TL;DR - Mid next week I'd like to jump in and help out with the Kalman filter for the PSMove. That said, I don't want to step on any toes, as I don't know if you have any unchecked-in work or if there is a piece you really wanted to do yourself. Here is my current plan for the next week (let me know what you think):

1) Fix the opencv external build issue in the master branch and make a release
2) Pull-request the DS4 work I've been doing up into master
3) Pull the latest from master down into CAPI
4) Update the CAPI to match the client changes I made for the DS4 work
5) Read up on the UKF paper you sent
6) Jump in where appropriate with the Kalman filter

Details: In another issue I alluded to DS4 tracking and the Kalman filter. As you might imagine, it's pretty easy to occlude the light bar on the DS4 (just by pitching or yawing the controller more than 45 degrees). Simply cutting out the controller in this circumstance feels terrible (because it happens so easily). Even supporting the PS4 camera won't fix this problem (it just gets you better tracking range). As such I'm postponing the PS4 camera stuff at work in favor of a better DS4 tracking filter.

Fortunately there is a pretty good accelerometer and gyro in the DS4. With the right amount of filtering/damping you can actually get a reasonable facsimile of the controller motion from integrated IMU data for many seconds after you lose optical tracking. In the DS4 branch I've added a mode to the PositionFilter that computes the "jerk" from the accelerometer to get the linear acceleration. This gets integrated into acceleration -> velocity -> position. The amount of IMU-integrated position we trust is blended against the "optical tracking quality", which is just a [0,1] value proportional to how many pixels of the light bar each tracker can see. I'm going to post a video this weekend showing what the current state of this is. I would classify the current result as "functional, but terrible".
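
Roughly what that blend amounts to, as an illustrative sketch (not the DS4-branch PositionFilter code; names and state layout are made up):

```python
import numpy as np

def blended_position_update(state, linear_accel, dt, optical_pos, optical_quality):
    """state: dict with 'pos' and 'vel' numpy arrays in the world frame."""
    # Dead-reckoned prediction from the IMU: acceleration -> velocity -> position.
    state['vel'] = state['vel'] + linear_accel * dt
    imu_pos = state['pos'] + state['vel'] * dt + 0.5 * linear_accel * dt * dt

    # Trust the optical position in proportion to how well the trackers see the bar.
    w = np.clip(optical_quality, 0.0, 1.0)
    state['pos'] = w * np.asarray(optical_pos) + (1.0 - w) * imu_pos
    return state
```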

All of the blending weights for this are totally arbitrary. I feel like I could make the hand-off to IMU tracking and the general state management a lot more principled if it were implemented as a Kalman filter, with the appropriate gains for how much we trust each of the sensor readings at any given moment.

EDIT: typos

cboulay commented 8 years ago

Sorry, things have been a bit crazy at work, and I'm sick and my baby's sick so it's a mess at home too.

Since the last push to this branch...

HipsterSloth commented 8 years ago

@cboulay Oh no worries man! I figured things got crazy. So sorry to hear about plague at home. Rest up!

BTW, where did you get the complementary filter algorithm? I haven't been able to find any 9DOF complementary filter references and I don't fully understand the motivation for blending the reference frames the way you do.

The 9DOF filter algorithm was an adaptation of Madgwick's original algorithm from his paper. In his paper he defines an error-gradient step toward the best-aligned magnetometer and accelerometer readings that is blended with the current integrated gyro values based on a 'beta' error scale.

[image: update equations from Madgwick's paper]

The problem is that if your step can't outpace your gyro error, you get yaw drift. In my version I basically say:

1) Find the best-fit orientation for the current magnetometer/accelerometer readings by following the gradient all the way down (rather than just taking a single step).
2) Blend the result against the integrated gyro readings with an arbitrary blend parameter that feels good (rather than trying to base the blend off of the gyro error, beta).

I call it a complementary filter because it mirrors the spirit of other low-pass filters:

From http://www.glassercommunications.com/paul/samples/filters_for_fusion.pdf

The term “complementary filter” is often casually used in the literature to refer to any digital algorithm that serves to “blend” or “fuse” similar or redundant data from different sensors to achieve a robust estimate of a single state variable.

Since the best fit magnetometer/accelerometer alignment is much noisier than the integrated gyro data the blend weight is low.
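
In sketch form, the blend described above might look like this (assuming scipy rotations; this is only an illustration of the idea, not the actual PSMoveService code, and the blend weight is arbitrary):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def complementary_update(prev_orientation, gyro, dt, mag_accel_fit, blend=0.02):
    # Integrate the gyro reading onto the previous orientation estimate.
    gyro_integrated = prev_orientation * Rotation.from_rotvec(np.asarray(gyro) * dt)
    # Slerp a small, fixed fraction of the way toward the (noisier) mag/accel fit.
    keys = Rotation.from_quat([gyro_integrated.as_quat(), mag_accel_fit.as_quat()])
    return Slerp([0.0, 1.0], keys)([blend])[0]
```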

After I can get a good baseline using the complementary filter in Python, I'll continue on the UKF. I doubt I'll have it done before you start working on things but that's fine. I'll stay only in Python and work on tweaking the details of the baseline covariances and covariance updates and the other algorithm details. The results of this work can feed back into the UKF C++ implementation that you work on in parallel.

That sounds great to me. I still have to integrate all of the DS4 stuff over. Then I can start trying to port the Python Kalman filter to C++ and integrate it into the filtering framework we have now, which will need some rework since the position and orientation filters are separate entities at the moment.

HipsterSloth commented 8 years ago

@cboulay Sorry if you stated this earlier, but were you OK with adding sfwa/ukf as a third-party library to our project?

As near as I can tell (in the 15 minutes I've been looking at it), it looks like it's pretty much what we want, minus having to fill in our own state and measurement models. I was planning on adding it on the ukf branch, but I wanted to check with you in case you wanted to use something else or write our own implementation. My plan was to add it in, update our CMakeLists script to pull the needed .cpp/.h files into psmoveservice, then update our position/orientation filter framework to work with the UKF API.

cboulay commented 8 years ago

That seems reasonable. I'm not yet sure if it's feature-complete for what we want, but it's tiny and flexible so we can probably change it if needed. However, I have to admit that its style of C++ is out of my comfort zone. I know how templates work, but I don't enjoy reading code that makes heavy use of them.

HipsterSloth commented 8 years ago

Yeah, I guess they decided to go the template route instead of the interface or function-pointer route to keep the library generic and make the runtime implementation fast (static polymorphism at the cost of readability). I can treat it as a shim for now to get something up and running. Once the measurement model and state are settled I can bake in all the types, pulling in the relevant code and removing the usage of the library.

EDIT: I just realized it's header only too (noticed the .cpp files were only for tests). Which is kind of nice.

cboulay commented 8 years ago

Just wanted to share a couple resources. van der Merwe and Wan, the authors primarily responsible for sigma-point Kalman filters (SPKF, a superset including UKF), have several good papers on the topic. A couple of them can be found on this page of KF resources.

One in particular that I find to be pretty good is Sigma-Point Kalman Filters for Nonlinear Estimation and Sensor-Fusion - Applications to Integrated Navigation. Section IV is particularly relevant, as it describes the process model and the observation model (including a positional tracker (GPS) that is slightly offset from the body frame). It also describes how to incorporate a lagged positional tracker without recalculating all frames since the frame corresponding to the time of the measurement (i.e., my idea from post 5 above), with further details in Appendix B. While the paper has some pseudocode, the authors also have a Matlab toolbox. The original website is offline, but I found another website that still seems to be hosting it here.

The set of papers from these authors, and the Matlab code, are about 15 years old so there's probably better stuff out there, but this is a detailed explanation of the methods, with some pseudocode, and with some real code to go along, so it's helping me get through some of the more complicated bits.

HipsterSloth commented 8 years ago

Thanks for these! I just started fleshing out Kalman filter containers for the psmove and ds4 last night (they implement IPoseFilter and hide the UKF details). No model/state-estimation code yet of course (i.e., the "hard part"). As I was writing out the stub I remembered your questions about handling the sensor data at different rates. I noticed the UKF library supports either a fixed sensor update or a dynamic-rate update (sensors can post updates at different rates). I should read up on how they deal with lagged positional trackers. If I can let the sensor packets continue to drive the filter update and use the position tracking with age timestamps, that would keep things simpler than having to drive a filter update every time a tracker gets a new frame (which gets complicated with multiple trackers).

cboulay commented 8 years ago

I haven't looked at any of the multi-tracker stuff in a while. I was thinking it was working as a single positional estimate generated from one or more cameras (i.e., use triangulation when multiple cameras are available, otherwise use the cone fitting). So the filter would only be updated whenever a new optical_position observation was available, not on every frame.

But I never thought about the fact that the multiple cameras are not synchronized. That's a whole other can'o'worms. Is it even possible to sync the PSEye cameras?

HipsterSloth commented 8 years ago

Oh sorry, I wasn't clear. Yeah, it's currently implemented such that when multiple cameras are available you get a single position estimate. The most recent combined position estimate is passed into the filter update packet along with the sensor data. Currently we aren't taking the age of the optical position estimate into account. I have no idea how you would sync the cameras beyond polling them at the same time.

cboulay commented 8 years ago

Can they be polled at the same time? Is polling non-blocking?

Is the 'recent combined position estimate' updated (or marked as 'fresh') after a single camera returns a new frame, or only after all N cameras return a new frame? I assume it's the latter, but then I don't understand what you meant by "having to drive a filter update every time a tracker got a new frame."

cboulay commented 8 years ago

Where do the 3D models of the PSMove controller put the 0,0,0 point? I assume in the middle of the bulb, because that's the position our optical tracking algorithms are returning. I'm OK with using the middle of the bulb, but we should correct for the IMU offset in sensor fusion.

According to ifixit, the MEMS (mag) is up near the bulb, the gyro is in the skinniest part of the controller, and the accelerometer is in between (closer to gyro than bulb).

HipsterSloth commented 8 years ago

Can they be polled at the same time? Is polling non-blocking?

When I wrote that comment I assumed that polling the camera frame was non-blocking and that they were all polled at the same time. More on that in a second.

Is the 'recent combined position estimate' updated (or marked as 'fresh') after a single camera returns a new frame, or only after all N cameras return a new frame? I assume it's the latter, then I don't understand what you meant by than having to drive a filter update every time a tracker got a new frame.

We currently build the combined position estimate from the last frame received from each tracker, so long as the frame isn't older than an optical tracker timeout (I think 50ms?). The only thing the "new frame flag" (i.e. tracker->getHasUnpublishedState()) tells us is whether we need to bother with doing the work of computing a new tracker relative pose or just re-use the old one.

The alternative (one that I'm hoping to avoid because it's more complicated and would involve reworking a bunch of stuff) would be to replace the device polling style update with callbacks. Every time the ps3eye driver's urb request callback finishes a video frame or hid_read returns (set to blocking and running in another thread) we could execute a callback to update the controller's filter with a partial state update ("here's a new optical position" or "here's some new IMU state").

It occurred to me that it's been a while since I actually looked at what gets updated when. So I just decided to double check some stuff:

1) We poll all devices in DeviceManager::update(). I noticed we polled the controller first, then the trackers. I think we want to actually flip that around so that we have the latest optical frame for the controller update?

2) Polling a camera frame is actually a blocking operation. I forgot about this. If the video frame queue doesn't have any data in it, the main thread will block until there is data. Getting a frame doesn't necessarily mean it's the most recent one, either; there could be a newer frame in the queue as well. I'm personally not a fan of either of those properties. In the ps4camera branch I actually have a version of the ps3eye driver that is completely non-blocking and will re-poll if a new frame arrives while the current frame is being read.

Where do the 3D models of the PSMove controller put the 0,0,0 point? I assume in the middle of the bulb, because that's the position our optical tracking algorithms are returning. I'm OK with using the middle of the bulb, but we should correct for the IMU displacement in sensor fusion.

Yeah, it uses the middle of the bulb. That is a great point about the offset, though. I wonder if it's better to center the model on the most accurate sensor (the gyro) or the least accurate (the accelerometer)? Or does it matter so long as you can describe the offset correctly in the filter model?

cboulay commented 8 years ago

Polling a camera frame is actually a blocking operation.

OK, that explains why every time I called PSM_Update I was guaranteed to get updated positions and sensor data, not just one or the other. Example here (columns are accel xyz, gyro xyz, mag xyz, pos xyz, est_orient wxyz, time in secs).

1) We poll all devices in DeviceManager::update(). I noticed we polled the controller first, then the trackers. I think we want to actually flip that around so that we have the latest optical frame for the controller update?

If I recall correctly, the logic for that was so we knew the LEDs were set to the correct values for each controller before we processed the image.

In hindsight, the "correct" way to do things would be to asynchronously fetch new data, then have the callback update the in-service pipeline: raw data -> derived data -> filter update. This is true for both the optical tracker and the IMU. The process function would be updating the filter state at over 150 Hz with small time increments, and the observation function would have to be broken into parts: one for optical position, one for IMU sensor data. Luckily everyone in the field is designing for embedded systems, so these things are pretty fast.

HipsterSloth commented 8 years ago

I got an initial stab at a Kalman filter for the psmove and the ds4 implemented: https://github.com/cboulay/PSMoveService/commit/d1c31c00c142b142d9cf68d8b6411a536dfac40a

Still a lot to do. I just wanted to get the plumbing in place so that we could iterate on the model.

First and foremost, I gave up on sfwa/ukf. I couldn't get it to build in MSVC 2015. It wanted a patched build of Eigen, but it turns out there is no standard patch command on Windows. Even after manually applying the patch (which was just a one-line change) I got a couple hundred template-related compile errors. I tried fixing a few before it became apparent that it was going to require quite a bit of refactoring. I think they were using some C++14-isms that don't seem to work in MSVC (usages of the "using" and "constexpr" statements).

I found another header only kalman filter library that had a UKF implementation (as well as EKF): https://github.com/mherb/kalman. It also doesn't drink quite so deeply of the template madness. It uses inheritance to let the user define the details of their filter. I found their robot motion filter example to be more readable than the cukf example.

Last night I filled in the implementation of KalmanPoseFilter.cpp/.h. This was basically me just adapting the Robot1 example from mherb/kalman.

Some big questions still remain, but now that we have the framework in place at least we can iterate on them:

I also read over the sigma-point Kalman filter paper over the weekend. I need to read it a few more times since a bunch of it went right over my head, but one thing that stuck out at me was the time-delayed GPS measurements. They said they were dealing with a 50ms delay. I suspect the optical tracking isn't that far behind (probably 15-20ms at the worst). I think we should first do the work to deduce what the latency is and then do a simple prediction on the optical reading before passing it into the Kalman filter. For the time being, though, I think we should assume there is zero optical latency until we get the rest of the filter model nailed down.

Also, none of this code should be treated as gospel, so if there is anything you don't like or that doesn't make sense, please feel free to call it out.

cboulay commented 8 years ago

As we speak, I'm working on the wiki that will hopefully help guide us through this. It's not yet done.

Without looking at the code, there are a couple comments above that caught my attention.

If a measurement is available (i.e. optical position) I replace the predicted value with the measured value

This is incorrect. Your next point is closer. The observation model should only be predicting the sensor values for the latest state estimate. It should be up to other functions to compare the predicted (expected) sensor values to the real sensor values.

What is the right way to handle unavailable measurements?

Maybe some cycles can be saved by not predicting them in the observation model; then the error and covariance calculations should ignore those sensors. So the different functions should be broken up into per-sensor functions, and the covariance matrix should be constrained so the cross-sensor covariance is always zero. I might be able to pick this out after I look at the code.
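
One way that per-sensor split could be organized (purely illustrative; the state indices, noise values, and dictionary layout are assumptions, though the gyro variance echoes the calibration number posted earlier):

```python
import numpy as np
from scipy.linalg import block_diag

SENSORS = {
    'optical_position': {'h': lambda x: x[0:3],   'R': np.eye(3) * 1e-4},
    'gyroscope':        {'h': lambda x: x[10:13], 'R': np.eye(3) * 2.2e-4},
    # accelerometer, magnetometer, ... would follow the same pattern
}

def observation_for_frame(state, available):
    """Stack predicted measurements only for the sensors that reported this frame,
    and build a block-diagonal R so cross-sensor covariance stays zero."""
    hs = [SENSORS[name]['h'](state) for name in available]
    Rs = [SENSORS[name]['R'] for name in available]
    return np.concatenate(hs), block_diag(*Rs)
```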

LinearizedMeasurementModel

Sounds very EKF-like and not UKF-like. Maybe they just reused a name.

Should the gryo and accelerometer be used as a control vector?

I'm operating as though they are dumb sensors and I'm using no input; that's what's done in many of the papers I'm looking at. They could be used as inputs, but I'm worried they are a little too noisy for that. If we get latency problems then using these as inputs should help.

cboulay commented 8 years ago

I just took a quick look at the robot example. They are using both EKF and UKF in that example. Anything related to 'Jacobians' and 'LinearizedX' is needed only for the EKF. We want to use the SquareRootUnscentedKalmanFilter.

cboulay commented 8 years ago

I just went through the mherb/kalman repository and, as far as I can tell, there is nowhere to provide any info about the noise. The only parameters for the SRUKF are alpha, beta, kappa.

In other implementations I've looked at we at least have to provide an initial estimate of the state and its covariance, and optionally update the noise at every step.

HipsterSloth commented 8 years ago

I think the SquareRootBase::getCovarianceSquareRoot() method called in predict might be where the noise comes from for the process model? Similarly, the getCovarianceSquareRoot call in update provides the noise for the observation model? I believe you have to fill in the implementation of getCovarianceSquareRoot in the derived versions of the system model and measurement model classes. The measurement and system model noise parameters could then be updated at every step there. Does that sound right?

Thanks for that wiki entry BTW. I'm reading through it now, but am distracted with some other work stuff at the moment.

cboulay commented 8 years ago

We don't want to re-implement getCovarianceSquareRoot; instead we just want to call setCovariance at the appropriate time, i.e., set the process covariance at the beginning (and optionally before every predict if we think it should be dynamic), and set the observation covariance at the beginning (and optionally before every update).

Edit: setCovariance(covariance) calls S.compute(covariance). S is a CovarianceSquareRoot object, which is just a Cholesky object. This does not implement compute() itself, but inherits from Eigen's LLT (Cholesky module), which defines compute here. From what I can tell, the input argument should be a full covariance matrix, not the sqrt of the covariance. But I'm not sure. If I'm correct, then calculating the sqrt of the covariance on each update will incur some cost, and maybe we should avoid it if it doesn't provide much benefit.

I updated the Wiki entry with an outline of the SRUKF steps, and bolded the parts that we provide. Note that this isn't the only algorithm, in fact the same authors used a different algorithm in a different paper that made different use of the noise (see the added note after the step-by-step).

cboulay commented 8 years ago

Here's a breakdown of mherb's SRUKF, with some notes:

  1. Initialize state with ukf.init(initial_state)
  2. Predict step: ukf.predict(system_model, control_input)
    • computeSigmaPoints()
      • Uses member variable S (sqrt state covariance matrix)
      • Updates member variable sigmaStatePoints
    • x = computeStatePrediction(system_model, control_input)
      • computeSigmaPointTransition(system_model, control_input)
      • Uses system_model.f(), aka process function
      • return computePredictionFromSigmaPoints<State>(sigmaStatePoints)
      • Just a weighted sum of sigma points.
    • computeCovarianceSquareRootFromSigmaPoints(x, sigmaStatePoints, system_model.S, this.S)
      • system_model.S is used as const noise_cov. This could be updated dynamically with system_model.setCovariance(newS)
      • this.S is regular state covariance that gets updated.
      • Builds augmented matrix from sigma points and noise covariance.
      • Calculates covariance from qr-decomposition and cholupdate of augmented matrix and stores in this.S.
    • return getState() (which is just x).
  3. Update step ukf.update(measurement_model, measurement)
    • computeSigmaPoints() (like above, but now using predicted state)
    • measurement_y = computeMeasurementPrediction(measurement_model, out_sigmaMeasurementPoints)
      • computeSigmaPointMeasurements(measurement_model, sigmaMeasurementPoints)
      • Runs sigmaStatePoints through measurement_model.h(), aka observation function and puts result in out_sigmaMeasurementPoints.
      • return computePredictionFromSigmaPoints(sigmaMeasurementPoints)
      • Just a weighted sum of measurement sigma points.
    • computeCovarianceSquareRootFromSigmaPoints(measurement_y, sigmaMeasurementPoints, measurement_model.S, output innovation covariance S_y)
      • Same as above for predict step.
      • measurement_model.S could be updated dynamically with measurement_model.setCovariance(newS)
    • computeKalmanGain(measurement_y, sigmaMeasurementPoints, S_y, K)
    • Update state with Kalman gain on innovations x += K * ( observations_z - measurement_y )
    • updateStateCovariance(K, S_y)

This is identical to the algorithm I describe in the Wiki (taken from the same paper).

HipsterSloth commented 8 years ago

Thanks for the outline. It gives me some peace of mind to know it lines up with the SRUKF paper.

This is incorrect. Your next point is closer. The observation model should only be predicting the sensor values for the latest state estimate. It should be up to other functions to compare the predicted (expected) sensor values to the real sensor values.

After looking at this again, I realized I was passing the measurement value into ukf.update(), so the innovation would be computed correctly. We only get a zero innovation when we don't have a measurement to use and I fall back to passing in the prediction, which, it sounds like, probably isn't the right thing to do.

(LinearizedMeasurementModel) Sounds very EKF-like and not UKF-like. Maybe they just reused a name.

Yeah, it turns out I was using the linearized versions of the MeasurementModel and SystemModel classes; the linearized ones have Jacobians the UKF doesn't need. I switched the two models over.

We want to use the SquareRootUnscentedKalmanFilter.

Switched this over as well.

I'm operating as though they are dumb sensors and I'm using no input; that's what's done in many of the papers I'm looking at. They could be used as inputs, but I'm worried they are a little too noisy for that. If we get latency problems then using these as inputs should help.

It doesn't look like there is another way to get the gyro and accelerometer values into the process function without using the control_input parameter (that I could see). We are deriving our own SystemState class, though, so I could add setters for these sensor values before we call predict (with a zero control vector); but given that we get to define how we apply the control value in the process function, this is effectively the same as the setter solution, correct? Or is the control vector transformed in some way before getting passed into the process function?

We don't want to re-implement getCovarianceSquareRoot, instead we just want to call setCovariance at the appropriate time

I refactored the code to call setCovariance at initialization for the process covariance and the measurement covariance (though the values passed in are just random defaults at the moment). I added places to set the covariance during update, but they're commented out at the moment. Looks like I need to make a calibration tool to compute the measurement covariances (optical and magnetometer).

cboulay commented 8 years ago

It doesn't look like there is another way to get the gyro and accelerometer values into the process function without using the control_input parameter (that I could see).

I don't understand why you want them in the process function. They don't need to be there at all. I've seen one paper that did it that way, but only one of the 7 or so I've looked at. I may have some old notes in my Python scripts that suggested we do that; please ignore those. My Python code has changed quite a bit (but still not settled enough to push).

HipsterSloth commented 8 years ago

I don't understand why you want them in the process function.

I just realized I've been totally misunderstanding the process/measurement model. I thought we needed the sensor values passed into the process model to update the physics. But that is supposed to be updated by the acceleration (linear/angular) in the process model. I guess the sensor values are used in the measurement model? (I just looked back at some earlier notes of yours.)

Sorry about that misunderstanding on my part. I have some code to tweak...

cboulay commented 8 years ago

No problem. This whole thing only started clicking for me a couple days ago. Sorry if I led you astray along the way. I'm continually updating the Wiki so maybe check back there if you're confused. I'm close to converging on a process model and observation model.

BTW, I just looked at the Matlab toolbox and their srukf.m uses the algorithm I mention in the "Note" below the algorithm on the Wiki, which is not the same as used by mherb. I asked him a question about this here.

cboulay commented 8 years ago

pykalman isn't working out for me. The main problem is that it doesn't allow specifying the timesteps.

I found another great resource. It's not a square-root transform but just a vanilla UKF. That's OK for the Python work because we aren't worried about performance. Excellent Jupyter notebook and repository.

HipsterSloth commented 8 years ago

Wow that is a really clean write-up. It also partially answered a question for me about how to specify the process noise matrix, Q:

Finally, let's assume that the process noise can be represented by the discrete white noise model - that is, that over each time period the acceleration is constant. We can use FilterPy's Q_discrete_white_noise() to create this matrix for us, but for review the matrix is

[image: the discrete white-noise process covariance matrix from the notebook]

Is the variance they use there just a parameter you have to tune yourself?

cboulay commented 8 years ago

Yes, the variance parameter is arbitrary. There are probably some ways to calculate the parameter online but I think we can hard-code it.

In this example, the process noise uses a model of constant acceleration, so they get the discrete white noise using the above equation for x and vel_x, and again for y and vel_y. Each dimension is independent of the others, but the terms within a dimension are correlated. In other words, if you want to use something like the above to calculate the process noise, either make sure the state variable is arranged like [x vel_x acc_x y vel_y acc_y z vel_z acc_z ...], or calculate the above matrix separately and then index it to fill in Q at the appropriate rows and columns.
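
A sketch of the first arrangement with FilterPy (the dt, variance, and the state ordering [x vel_x acc_x y vel_y acc_y z vel_z acc_z] are assumptions here):

```python
import numpy as np
from scipy.linalg import block_diag
from filterpy.common import Q_discrete_white_noise

dt, var = 1.0 / 60.0, 0.1                                 # var is the hand-tuned acceleration variance
Q_axis = Q_discrete_white_noise(dim=3, dt=dt, var=var)    # 3x3 block for [pos, vel, acc] of one axis
Q_position = block_diag(Q_axis, Q_axis, Q_axis)           # x, y, z blocks; cross-axis terms stay zero
```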

I don't know how to build Q for the orientation and angular velocity parts.

cboulay commented 8 years ago

I have something sorta working.

[image: myfirst_ukf plot]

HipsterSloth commented 8 years ago

Nice work! What did you do the graphing in? Is that a Python package, or is this in Matlab? Either way, it's awesome to see the filter producing sensible data. I presume this only has the position model in place, correct?

cboulay commented 8 years ago

It's in Python. It should be generated automatically when running the test_filterpy.py script, now pushed to the client_capi branch.

It includes a very basic & definitely incorrect orientation model that I didn't visualize, so actually I don't know if that's any good or not. I don't think the library I'm using handles partial observations so it's hard for me to remove some optical tracker data points and see what happens.

In the script there are a few TODOs inline and a few at the bottom. A couple of these TODOs relate directly to the problem of calculating the mean and difference of quaternions. This library allows us to supply custom functions for calculating the state mean and differences, though I haven't done that. The C++ one does not, I don't think, but I guess we can re-implement those functions.
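
For reference, a stripped-down sketch of where those custom functions would plug into FilterPy (the state layout, fx/hx, and sigma-point parameters are placeholders, not the actual test_filterpy.py model):

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):   # process model placeholder
    return x

def hx(x):       # observation model placeholder: optical position only
    return x[:3]

def state_mean(sigmas, weights):
    # Plain weighted mean; the quaternion part of the state would instead use a
    # proper weighted quaternion average (see the next comment).
    return np.dot(weights, sigmas)

def state_residual(a, b):
    # Plain subtraction; the quaternion part would instead use q_a * q_b^-1.
    return a - b

points = MerweScaledSigmaPoints(n=9, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=9, dim_z=3, dt=1.0 / 60.0, hx=hx, fx=fx,
                            points=points, x_mean_fn=state_mean,
                            residual_x=state_residual)
```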

HipsterSloth commented 8 years ago

Is computing the "state mean" of a set of quaternions the same as computing the weighted average of a list of quaternions? If so I have a method for that in MathAlignment.cpp, eigen_quaternion_compute_weighted_average, that implements the algorithm from here: http://stackoverflow.com/questions/12374087/average-of-multiple-quaternions EDIT: specifically this paper
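
A sketch of the eigenvector-based weighted average from that paper (presumably what eigen_quaternion_compute_weighted_average implements, though I haven't checked the C++ against it):

```python
import numpy as np

def weighted_quaternion_average(quats, weights):
    """quats: (N, 4) array of unit quaternions, weights: (N,) array of weights."""
    A = np.zeros((4, 4))
    for q, w in zip(np.asarray(quats, dtype=float), weights):
        A += w * np.outer(q, q)   # sign-insensitive: (-q)(-q)^T == q q^T
    eigvals, eigvecs = np.linalg.eigh(A)
    return eigvecs[:, -1]         # eigenvector of the largest eigenvalue
```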

cboulay commented 8 years ago

Yes, it is. Thanks. I added in the function for that and another for computing the difference. Unfortunately, including either of these in the model causes the Cholesky factorization to crash with numpy.linalg.linalg.LinAlgError: 10-th leading minor not positive definite. More work needs to be done to bring the quaternion math into every step.

The Enayati thesis has some steps for dealing with quaternions.

Incidentally, a thesis by Baldassano has Matlab code in Appendix B for an SRUKF (same algorithm) using an IMU. He doesn't do any quaternion math there. Unfortunately he doesn't explicitly list his state variables, so I can't be sure he's even using a quaternion.

I suppose I could use yaw, pitch, roll instead of the quaternion, and then manually deal with angles wrapping around 2*pi when adding and subtracting.
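
If we did go the yaw/pitch/roll route, the wrap-around handling is just keeping every angle residual in [-pi, pi]; a quick sketch:

```python
import numpy as np

def angle_residual(a, b):
    """Difference between two angle vectors, wrapped into [-pi, pi]."""
    d = np.asarray(a) - np.asarray(b)
    return (d + np.pi) % (2.0 * np.pi) - np.pi
```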