bitcraze / crazyflie-firmware

The main firmware for the Crazyflie Nano Quadcopter, Crazyflie Bolt Quadcopter and Roadrunner Positioning Tag.
GNU General Public License v3.0

Add support for positioning using only one Lighthouse base station #461

Closed: krichardsson closed this issue 4 years ago

krichardsson commented 5 years ago

The current Lighthouse positioning solution is based on finding the crossing of vectors from two base stations. It should be possible to find the full pose for a Lighthouse deck using data from only one base station though, and this issue is about how to implement this functionality.

Ideally I think this should be integrated into the kalman filter to be fused with other sensor data.

An initial simplification could be to extract only the position from the Lighthouse deck and ignore roll/pitch/yaw (the current solution works this way).

Some ideas to get started:

NicksonYap commented 5 years ago

@krichardsson Without confirming the calibration's validity, is the accuracy still good enough for a single base station implementation, knowing that there may still be distortion due to the lack of calibration?

krichardsson commented 5 years ago

@NicksonYap Yes, I think it should work.

NicksonYap commented 5 years ago

Sharing my findings so far:

libsurvive has a "Poser" called PoserEPNP in poser_epnp.c (essentially one of the implementations for finding lighthouse position, and vice versa, finding sensor's locations)

About EPnP/PnP: https://en.wikipedia.org/wiki/Perspective-n-Point#EPnP https://www.youtube.com/watch?v=0JGC5hZYCVE

For PnP, given reception from only 1 base station:

- If there is reception for at least 3 sensors, it can produce a full 6DOF pose (XYZ, RPY)
- If there is reception for only 2 sensors, it can produce a 4DOF pose (XY, RP)
- If there is reception for only 1 sensor, it can produce a 2DOF pose (XY or XZ or YZ)
- If there is reception for only 1 sensor, but only 1 sweep in either the X or Y direction is received, then you get a 1DOF pose (for the sake of completeness)

It may also be possible to compute in a hybrid manner when 2 sensors have reception - where sensor 0 can see 2 lighthouses but sensor 1 can only see 1.

libsurvive has other "Posers", but so far PoserEPNP is the simplest to understand and implement. The others require minimizing/optimizing least squares (not sure if the CF can handle that).

poser_epnp.c also applies the LH calibration / reprojects the angles via the function survive_apply_bsd_calibration(), but that should be discussed in https://github.com/bitcraze/crazyflie-firmware/issues/430


Discussions:

@krichardsson @ataffanel For this to work reliably with any combination of sensors and basestations, it seems that the estimator will need to be able to receive a single positional/angular axis with respect to any normal/vector (most likely with respect to the basestations, in our case).

Meaning that even in situations with a single sweep detected from a single sensor, the estimator should be able to use this data to make corrections along a single axis.

  1. Would you agree that PnP is the algorithm we should be focusing on for single base station positioning?

  2. Given that we're implementing PnP, would you guys agree that handling of LH detections should be done in kalman_core.c instead of lighthouse.c or lighthouse_geom.c, so that the kalman estimator can update using only a single axis? (If so, then it's really beyond my knowledge scope; the best I could do is produce a 4/6DOF pose from 2/3 sensors and feed it into the estimator the way it's implemented now.)

  3. Am I right that kalmanCoreUpdateWithTof() is the closest to being able to update using only a single axis? In the Ranger deck's case it is 1 axis (distance) with respect to the Crazyflie's frame; in our case it's 1 axis (angle) with respect to a base station.

  4. Do you think the function for updating the estimator using a single axis will work if multiple axes are detected at the same time? Or will the estimator need to handle this internally? This is the confusing part.


Notes

Surprisingly, I was able to set up my base stations about 10.5m apart, diagonally in a 10x4m space (without sync cables 👍 )

The HMD had no issue detecting and localizing around the space (I did not wear it to check the accuracy).

However the LH deck did not handle this well, particularly because the angle is a bit steep from a distance and both base stations need to be seen at the same time. By rotating the CF a bit I was able to get full reception (the LED Ring deck shows green in Lighthouse Quality Effect mode; just submitted a PR).

The CF has too little room to rotate before losing sight of one of the base stations.

Which is why I'm planning to resolder the photodiodes at an angle; however, it'll only work if the above is implemented.

We can then potentially localize in a 10x10m space (or 10x4m at least) using only the classic V1 base stations. It would be best if there were an 8-sensor LH deck.

ataffanel commented 5 years ago

Would you agree that PnP is the algorithm we should be focusing on for single base station positioning?

PnP is most likely the simplest to start with. My understanding is that it might not be the most efficient for real-time positioning, so eventually we might want to look at something else. But as a first approach PnP sounds like a good start.

Given that we're implementing PnP, would you guys agree that handling of LH detections should be done in kalman_core.c instead of lighthouse.c or lighthouse_geom.c? Such that the kalman estimator can update using only a single axis (if this is true, then it'll really be out of my knowledge scope, the best I could do is producing a 4/6DOF pose with 2/3 sensors, and feed into the estimator like how it's implemented now)

Eventually lighthouse should be handled directly in the EKF, ideally by pushing individual angles. Though I still do not understand how to make that happen: as you noted we can get 6DOF from the system, which means that each sensor reading will give both a position error and an attitude error. I have no idea how to express that. The easiest to start with would be to push the position into the Kalman filter from lighthouse.c the same way it is done today.

Am I right that kalmanCoreUpdateWithTof() is closest to being able to update using only a single axis?

kalmanCoreUpdateWithTof() only pushes a position error into the EKF, so I am not sure it will work in this case. As far as I understand, it would only work when receiving angles from both basestations.

Do you think the function for updating the estimator using a single axis will work if multiple axes are detected at the same time? Or that the estimator will need to know how to handle this internally? This is the confusing part

For me this is the tricky part: we want to push unique axes to the EKF in such a way that the EKF can recover 6DOF errors from them. My understanding is that when we push data to the EKF we essentially push an error vector on the internal state along with the magnitude of that error, where the error vector corresponds to our measurement vs. the current estimate. This is quite easy to reason about when pushing a position or attitude error (for attitude our EKF makes it a bit tricky though). But in our case each angle can come from both an attitude and a position error, so I am not sure how that can be modeled.
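To make the coupling concrete, here is a minimal sketch (illustrative names, not the firmware API): the sensor's world position is the CF position plus its body-frame offset rotated by the attitude estimate, so the predicted sweep angle - and hence the innovation - depends on both the position and the attitude states.

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

// Rotate a body-frame sensor offset into the world frame with the current
// attitude estimate R (row-major 3x3 rotation matrix).
static vec3 bodyToWorld(const float R[3][3], vec3 v) {
  vec3 r = {
    R[0][0]*v.x + R[0][1]*v.y + R[0][2]*v.z,
    R[1][0]*v.x + R[1][1]*v.y + R[1][2]*v.z,
    R[2][0]*v.x + R[2][1]*v.y + R[2][2]*v.z,
  };
  return r;
}

// World position of one deck sensor: CF position + rotated body-frame offset.
static vec3 sensorWorldPos(vec3 pCf, const float R[3][3], vec3 rSensorBody) {
  vec3 off = bodyToWorld(R, rSensorBody);
  vec3 r = { pCf.x + off.x, pCf.y + off.y, pCf.z + off.z };
  return r;
}

// Predicted horizontal sweep angle of that sensor, with its position
// expressed in the base station frame: roughly atan2 of lateral offset over
// forward distance. Both pCf (position states) and R (attitude states)
// enter this measurement model.
static float predictedSweepAngle(vec3 sensorInBsFrame) {
  return atan2f(sensorInBsFrame.y, sensorInBsFrame.x);
}
```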

NicksonYap commented 5 years ago

I've been working continuously for days trying to derive the equation/algorithm for calculating position given any combination of sensor detections and basestation receptions (still requiring both horizontal and vertical sweeps per sensor).

They are based on http://geomalgorithms.com/a07-_distance.html#Distance-between-Lines, re-deriving the equations with a new set of assumptions/constraints.

I'm no longer sure if it's still under PnP, but the concept is the same. Efficiency-wise, it should be close to the current implementation as it also uses matrices, just that the calculation requires more operations. The goal of the equation is to minimize the error / find the best fit of the 4 sensors on the rays from the basestations. With calculus/differentiation (dy/dx = 0) we get a closed-form equation; there are no for-loops, etc.


Combinations I've worked on / I'm working on:

a. 1 sensor, 2 basestations (XYZ - 3DOF)
b. 2 sensors, 1 basestation (XY, YP - actually 4DOF but somewhat 6DOF, requires the Rotation Matrix from the Kalman Filter)
c. 2 sensors, 1 basestation each (e.g. Sensor 0 - Basestation C, Sensor 2 - Basestation B)
d. 3 sensors, 1 basestation each (still working on this)
... and so on

I've successfully derived a single equation that works for combinations a, b and c. It requires knowing the positions of the 4 sensors on the LH deck.

Following are plots from MATLAB, showing the basestation positions (triangles) and the rays detected from the basestations.

- The blue vector between rays is the shortest possible distance/segment
- The green vector between rays is the estimated position and orientation (pose) of the CF
- The CF is assumed to be facing the X axis (pointing top right in the screenshots), and only sensors 0 & 2 are used
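For reference, a minimal sketch of the shortest-segment computation between two rays (following the geomalgorithms link above; names are illustrative and direction vectors are assumed unit length):

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 add(vec3 a, vec3 b)  { vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
static vec3 sub(vec3 a, vec3 b)  { vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static vec3 scale(vec3 a, float s) { vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }

// Closest points between two rays p0 + s*u and q0 + t*v
// (http://geomalgorithms.com/a07-_distance.html). The midpoint of the
// shortest segment (the blue vector above) is the position estimate; the
// segment length is a rough error indicator. Returns false when the rays
// are near-parallel and the problem is ill-conditioned.
static bool rayMidpoint(vec3 p0, vec3 u, vec3 q0, vec3 v,
                        vec3 *mid, float *gap) {
  vec3 w0 = sub(p0, q0);
  float a = dot(u, u), b = dot(u, v), c = dot(v, v);
  float d = dot(u, w0), e = dot(v, w0);
  float den = a*c - b*b;          // ~0 for (almost) parallel rays
  if (den < 1e-9f) return false;
  float s = (b*e - c*d) / den;
  float t = (a*e - b*d) / den;
  vec3 pc = add(p0, scale(u, s)); // closest point on ray 1
  vec3 qc = add(q0, scale(v, t)); // closest point on ray 2
  *mid = scale(add(pc, qc), 0.5f);
  vec3 seg = sub(pc, qc);
  *gap = sqrtf(dot(seg, seg));
  return true;
}
```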


a. 1 sensor, 2 basestations (should produce identical result as current implementation)

[image]

- The orientation from the Green Vector is not valid for 1 sensor, 2 basestations
- The length of the Green Vector indicates the potential error of the measurement; since it's a single sensor there should be no distance

[image]


b. 2 sensors, 1 basestation

[image]

- The orientation from the Green Vector is valid, but we use only the one from the Rotation Matrix
- In this case they are close / identical

[image]

As you can see, if the distance from the CF is slightly off, or if the two rays deviate a little, the estimated CF position might be "pushed away" from or "pulled toward" the basestation. It is sensitive to errors in this configuration. Increasing the distance between sensors will help.


c. 2 sensors, 1 basestation each

The following demonstrates how the equation handles a possible error where the rays are impossibly far apart but get detected anyhow. The equation will try its best to minimize both the distance error from the two sensors and the error against the orientation given by the Kalman Filter.

[image]

- The orientation from the Green Vector is valid, but we use only the one from the Rotation Matrix
- Here you can see the resulting orientation vector does not agree with the Rotation Matrix (from 0 to 2)
- The difference between the length of the Green Vector and the distance between the sensors indicates the potential error; in this case it's quite large

[image]


Discussions

1. Currently the resulting orientation information from the rays (Green Vector) for cases b and c is dropped, and I only use the existing Rotation Matrix from the Kalman Filter (like Accel, Gyro & Compass). There should be a way to use this orientation information to update the Kalman Filter; it would then allow the CF to be placed in any direction (no longer needing to point along the X axis). But since the Rotation Matrix is actually required to find the Green Vector anyway, I'm not sure how much to trust the orientation given by the Green Vector. The Green Vector orientation would differ only if the distances & positions of the sensors do not fall perfectly on the rays, in which case the equation will try to minimize the error by rotating it. Need some advice on this.

Example of 2 sensors, 1 basestation each (this time the rays are close together):

[image]

The Green Vector agrees with the Rotation Matrix completely, both in length and in orientation. The length and orientation of the Green Vector have virtually zero error.

[image]

But if the Rotation Matrix says the CF has a yaw of 90 deg (pointed right), the rays will show that the sensors would be too far apart, so the Green Vector points top right instead.

[image]

2. Regarding the std deviation of the calculated position and attitude, it's possible to obtain it at runtime for each result, based on:

a. The accuracy of the timing/angles itself
b. The estimated distance from the lighthouse (further is less accurate)
c. The distance between sensors (the sensors can be resolved better if they are far apart)
d. ... and more

Once I'm done getting the equation for all combos, I can then differentiate it and get an equation for "what is the error of the result, given the errors of the items above (a, b, c, d)".

3. Regarding the Kalman Filter: kalmanCoreUpdateWithTof() eventually calls scalarUpdate(this, &H, measuredDistance-predictedDistance, tof->stdDev); where it attempts to update only the Z axis by providing the error and the std deviation (essentially the error vector); see the sketch at the end of this comment.

Since ToF has no idea about any other positional or angular axis, it does not suggest an error for them and leaves them untouched.

It seems kalmanCoreUpdateWithPose() updates position and attitude axis by axis. A quaternion is used for attitude. However a quaternion is 4D, yet there are only 3 states for it in the kalman filter: D0 for Quat-X, D1 for Quat-Y, D2 for Quat-Z (Quat-W not used). Are D0, D1, D2 somehow yaw, pitch & roll? Are they in radians? (I'm not very familiar with Kalman Filters.)

If we have a scalarUpdate() for individual positional and angular axes, then I will be able to push updates to the appropriate axis based on whatever information can be gathered from the lighthouses, even if it's only 1 sensor and 1 basestation.

For example, if the estimated position of the CF from the Kalman Filter does not cross the only ray that we detected, we introduce position error vectors pointing to the closest point on that ray, even though we do not know how far the CF is from that basestation. Similarly, if the detections from the LH are able to provide orientation, we also provide an attitude error vector to the Kalman Filter.

If the above is possible, then it will work even if only a single sweep is detected.
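For reference, a hedged sketch of the shape of that single-axis update, modeled on kalmanCoreUpdateWithTof() in kalman_core.c (the level-flight simplification is an assumption; the real code also accounts for the tilt of the CF):

```c
// Would live inside kalman_core.c, where scalarUpdate() and the state
// vector S are visible. H is a 1 x KC_STATE_DIM measurement Jacobian: its
// non-zero entries say which states the scalar measurement constrains.
static void updateWithDistanceZ(kalmanCoreData_t *this,
                                float measuredDistance, float stdDev) {
  float h[KC_STATE_DIM] = {0};
  arm_matrix_instance_f32 H = {1, KC_STATE_DIM, h};

  h[KC_STATE_Z] = 1.0f;                           // measurement only touches Z
  float predictedDistance = this->S[KC_STATE_Z];  // assuming level flight
  scalarUpdate(this, &H, measuredDistance - predictedDistance, stdDev);

  // A lighthouse sweep angle would instead fill several entries of h[]
  // (position *and* attitude-error states), since the predicted angle
  // depends on both - which is exactly the open modeling question here.
}
```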

krichardsson commented 5 years ago

@NicksonYap Wow, good work! I think this looks interesting.

  1. I'm not an expert on kalman filters either, but we have had the idea that it might be possible to separate the position and rotation updates: a/ first trust the rotation matrix and update the position, assuming in this stage that the estimated orientation is correct; b/ secondly update the orientation, based on the assumption that the estimated position is correct.

  2. It would be possible to add a new update flavour to the kalman filter if needed; the ToF one is probably not the right fit for this problem. I think the state in this kalman filter is a bit unusual: https://github.com/bitcraze/crazyflie-firmware/blob/a07e58f001701910a42c0698c8c59a970210e1b0/src/modules/interface/kalman_core.h#L69-L78 D0-D2 is the attitude error.
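For readers following along, the linked lines define the state roughly as below (paraphrased from kalman_core.h; the comments are added here):

```c
typedef enum
{
  KC_STATE_X,  KC_STATE_Y,  KC_STATE_Z,   // position (m)
  KC_STATE_PX, KC_STATE_PY, KC_STATE_PZ,  // velocity, body frame (m/s)
  KC_STATE_D0, KC_STATE_D1, KC_STATE_D2,  // attitude error, small rotation
                                          // vector around body axes (rad)
  KC_STATE_DIM
} kalmanCoreStateIdx_t;
```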

NicksonYap commented 5 years ago

@krichardsson

Thanks for the reply,

For num 1: yes, I predicted it would be that way, but it's rather odd; it might cause some kind of feedback and overshoot (will try after I get this done).

For num 2: it's blank, a typo?

For num 3: since D0, D1, D2 are the attitude error, and there are only three of them, I assume they are Y, P, R in radians?

NicksonYap commented 5 years ago

@krichardsson @ataffanel

I've managed to implement the MATLAB formula in the CF firmware. It turns out there are some heavy calculations required to solve the matrix equation (involving SVD & the pseudoinverse) that were just a single function call in MATLAB.

The algorithm works by gathering all possible ray vectors (max 4 sensors * 2 BS = 8 rays), then getting all the possible pairs of rays, calculating the center of each pair and averaging them. This will work no matter the combination of sensors or basestations, as long as a pair of rays can be retrieved. It requires the Rotation Matrix and can only accept 2 rays per position calculation. (I've concluded that trying to get position and pose using >2 rays will require some kind of solver.)
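A minimal sketch of that pair-averaging loop, reusing vec3 and rayMidpoint() from the earlier sketch (ray_t and the cap of 12 follow the description above; the rotation-matrix compensation of the known sensor offsets is omitted for brevity):

```c
#define MAX_RAYS 8   // 4 sensors * 2 base stations

typedef struct { vec3 origin; vec3 dir; } ray_t;

// Average the midpoints of the shortest segments between all ray pairs.
// NB: the real equation also uses the rotation matrix to compensate for
// the known sensor offsets before pairing; omitted here for brevity.
static bool estimateFromRayPairs(const ray_t *rays, int n, vec3 *posOut) {
  vec3 sum = {0.0f, 0.0f, 0.0f};
  int count = 0;
  for (int i = 0; i < n && count < 12; i++) {
    for (int j = i + 1; j < n && count < 12; j++) {
      vec3 mid;
      float gap;
      if (rayMidpoint(rays[i].origin, rays[i].dir,
                      rays[j].origin, rays[j].dir, &mid, &gap)) {
        sum = add(sum, mid);
        count++;
      }
    }
  }
  if (count == 0) return false;
  *posOut = scale(sum, 1.0f / count);
  return true;
}
```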

Edit: initially I had memory issues for several days and asked for help, but I have just managed to solve it :)


note:

Btw, for a single basestation, without highly accurate calibration of the BS and LH deck, it will not be possible to obtain a usable position from the rays, because as seen from the LH deck the pairs of rays are almost parallel.

Angular accuracy needs to be on the order of 0.01 to 0.001 degrees in order to work with 1 basestation.
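A rough back-of-envelope of why the tolerance is this tight (the baseline and distance numbers are assumptions, not from measurements): with a sensor baseline $b$ seen from distance $d$, a ray pair subtends $\theta \approx b/d$, and an angular error $\epsilon$ turns into a depth error of roughly

$$
\Delta d \;\approx\; \frac{d\,\epsilon}{\theta} \;=\; \frac{d^{2}}{b}\,\epsilon .
$$

For $b = 3\,\text{cm}$, $d = 3\,\text{m}$ and $\epsilon = 0.01^{\circ} \approx 1.7\times10^{-4}\,\text{rad}$, that gives $\Delta d \approx 300 \cdot 1.7\times10^{-4}\,\text{m} \approx 5\,\text{cm}$ of depth error from a single pair.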

Edit: based on initial observations I didn't think it was possible to work with a single basestation without accurate calibration; however, I just tested it and it actually works. Maybe the heavy averaging (12 times) helped a lot.

I will probably make a branch and share the code here. It's not really an efficient way to compute position, but it might be the only way to fit it into the CF.

However when switching from 1 basestation to 2 or vice versa, there may be a sudden shift in position (calibration issue)

There are large position glitches once in a while, still looking into it

Accuracy will depend on the distance from the BS, the orientation of the sensors (perpendicular to the rays is best, meaning the top of the LH deck faces the front of the BS) and the distance between the sensors.

For single basestation, the best set up might actually be:

  1. To have the base station placed from the audience's point of view (likely below)
  2. As close to the CF as possible
  3. Have the LH deck facing directly toward the basestation (facing down if the BS is below)
  4. Increase the distance between sensors to 8cm or more (like the new Active Marker deck)

krichardsson commented 5 years ago

Very cool! I'll try to take a look this week.

However when switching from 1 basestation to 2 or vice versa, there may be a sudden shift in position (calibration issue)

When you have access to two basestations, do you calculate the crossing point the same way it is done in the current implementation, and then feed the result to the kalman filter? Another solution would be to calculate the two solutions separately and feed both of them to the kalman filter. I think this would give a smoother transition.
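A minimal sketch of what that could look like, assuming a hypothetical per-base-station solver (estimatorEnqueuePosition() and positionMeasurement_t are the existing firmware entry points; the helper and the stdDev value are illustrative):

```c
#include <stdbool.h>
#include "estimator.h"
#include "stabilizer_types.h"

// Hypothetical helper: solve a position using rays from base station `bs` only.
bool solvePositionForBaseStation(int bs, float pos[3]);

static void pushLighthousePositions(void) {
  for (int bs = 0; bs < 2; bs++) {
    float pos[3];
    if (solvePositionForBaseStation(bs, pos)) {
      positionMeasurement_t m = {
        .x = pos[0], .y = pos[1], .z = pos[2],
        .stdDev = 0.01f,  // illustrative; single-BS solutions deserve a larger value
      };
      estimatorEnqueuePosition(&m);  // let the EKF do the weighting/averaging
    }
  }
}
```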

There are large position glitches once in a while, still looking into it

We have not seen this. We tried to write the decoder to be as robust as possible, for instance to handle camera flashes. If the decoder cannot be hardened, we could also add an outlier filter that rejects suspicious samples.

For single basestation, the best set up might actually be...

Let's see how far we can take the current deck first. Maybe there will be good reasons to create a different design in the future?

NicksonYap commented 5 years ago

When you have access to two basestations, do you calculate the crossing point the same way it is done in the current implementation, and then feed the result to the kalman filter?

Yes, but to be more specific: there is only one estimatePosition() function that deals with all combinations of basestations and sensors detected (it is invoked and updates the position every time a new frame/sweep is received, which is not the best way). Positions from the combinations (pairs of rays) are then averaged, up to a limit of 12, to produce a single position, which is fed to the kalman filter (XYZ) the way it is currently implemented in master (which averages using only 4 pairs of rays).

By "two solutions" do you mean we submit position results to kalman filter separately, for each basestation? We could give it a try, but the difference in accuracy in theory is very large. Using 2 BS is magnitudes more accurate than 1 BS due to the great distance between BS (in dual BS mode) vs the small distance between sensors (in single BS mode) So dropping a single high accuracy position from 2 BS in exchange for two low accuracy position from 1 BS might be quite a big trade off for smoothness, but worth a try

We tried to write the decoder to be as robust as possible

It is surely not due to the sweep/pulse decoder; there's something weird happening due to the code I added. In the CF Client plotter I can see sustained peaks (for half a second or so), however when I set a breakpoint trying to stop when large values are detected (like 6 meters off), it couldn't. I then just tried to fly it as-is, and it just works.

I believe the position values were never actually glitching, but the plotter somehow plots it that way, in a very consistent manner (fixed intervals and almost the same magnitude).

I'm not sure if it's due to the heavy CPU load or memory usage, affecting only the plotted/transmitted data.

Let's see how far we can take the current deck first.

Yeah, the current deck actually works pretty okay even for 1 BS. Though I've had some thoughts about a different design; you can let me know when you guys are up for the next revision.

krichardsson commented 5 years ago

By "two solutions" do you mean we submit position results to kalman filter separately, for each basestation?

Yes, or possibly 12 in your case :-) One way to look at it is that you leave the averaging to the kalman filter. Your solution is a good start, and if it works it might be fine for now, but I think it is more "correct" to feed them one by one.

We could give it a try, but the difference in accuracy in theory is very large. Using 2 BS is magnitudes more accurate than 1 BS due to the great distance between BS (in dual BS mode) vs the small distance between sensors (in single BS mode)

The difference in accuracy is handled by the std deviation parameter of the scalarUpdate() function; it tells the kalman filter how much the sample can be trusted (how noisy the data is). It is even possible to tell it that the sample is noisier along the axis pointing towards the basestation and less noisy at right angles, by calling scalarUpdate() multiple times. Samples with a low std deviation will have a greater effect on the solution than samples with a higher std deviation.

As mentioned earlier I think this solution will give a nicer transition when one basestation is occluded as the kalman filter hopefully will smooth it out (depending on the std deviation settings).
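Sketched, with illustrative noise numbers and hypothetical basis helpers (normalize(), makeOrthonormal()), reusing the vec3 helpers from the earlier sketch and written as if it lived in kalman_core.c next to scalarUpdate():

```c
// Push a position measurement with anisotropic noise: project the position
// error onto an orthonormal basis with e1 pointing at the base station,
// then push each component as its own scalar update with its own stdDev.
static void pushAnisotropicPosition(kalmanCoreData_t *this,
                                    vec3 measuredPos, vec3 dirToBs) {
  vec3 e1 = normalize(dirToBs);      // along the ray: poorly observed (depth)
  vec3 e2, e3;
  makeOrthonormal(e1, &e2, &e3);     // at right angles: well observed

  vec3 est = { this->S[KC_STATE_X], this->S[KC_STATE_Y], this->S[KC_STATE_Z] };
  vec3 err = sub(measuredPos, est);

  const vec3  axes[3]   = { e1, e2, e3 };
  const float stdDev[3] = { 0.10f, 0.01f, 0.01f };  // depth vs lateral, illustrative

  for (int i = 0; i < 3; i++) {
    float h[KC_STATE_DIM] = {0};
    arm_matrix_instance_f32 H = {1, KC_STATE_DIM, h};
    h[KC_STATE_X] = axes[i].x;   // scalar measurement = position projected
    h[KC_STATE_Y] = axes[i].y;   // onto this axis
    h[KC_STATE_Z] = axes[i].z;
    scalarUpdate(this, &H, dot(err, axes[i]), stdDev[i]);
  }
}
```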

dolfje commented 4 years ago

Just tested the experimental feature. I don't know the state of this, or whether you want to collect information, but I saw this behaviour:

I know this is experimental, but are these behaviours expected because not everything is implemented yet? Or would it be helpful to stress-test the feature?

krichardsson commented 4 years ago

Thanks @dolfje! I mainly committed the code to share it within the team (and with anyone who is interested), and there are still known issues, for instance the handling of one base station. I would expect it to work decently with two base stations, but it is work in progress...

dolfje commented 4 years ago

@krichardsson Okay, then I will leave the interesting part here for future reference, or in case you want me to run tests. A video of the test with FF_experimental on: https://photos.app.goo.gl/p24PhomFMtxhbCed9 With the flag off, it goes directly up; no -y movement.

krichardsson commented 4 years ago

@dolfje if you want to play with this I hope it will become better and better over time :-) I don't think it is ready for testing yet.

krichardsson commented 4 years ago

@dolfje This should hopefully work now if you want to try it out.

NicksonYap commented 4 years ago

@krichardsson I have been inactive for several months and the issue is now closed; does this mean that obtaining position from a single basestation has been implemented?

It does look like yaw estimation using a single basestation is implemented.

krichardsson commented 4 years ago

@NicksonYap Yes, it should work with one basestation now. The precision is still better with two basestations, but the CF will be able to handle the loss of one basestation for a while. There are inherent problems in the lighthouse V1 protocol, which we cannot overcome, that make it hard to identify whether a pulse is from the master or the slave basestation after occlusion.