superjax closed this issue 8 years ago
I've got the temperature reading in MPU6050. I'll pull it into Simon's library soon.
It appears that the standard procedure for temperature compensation is to take a bunch of measurements of the bias at different temperatures, then find a least-squares fit of bias vs. temperature for each axis. We already have the correct_imu function in sensors.c.
I think that it wouldn't be too bad to throw in a temperature compensation term on each axis. It seems that most people use a linear compensation, while some people have gone for quadratic. I'm not sure which we should do, but I'll get started on implementing that.
@dpkoch, do you have an automated calibration already set up? If so, I expect it would be faster for me to modify your calibration code than to write my own.
Yeah we definitely need to do something about the calibration, and I think temperature compensation is a big part of it. I just started with a room temperature Naze (unplugged), then plugged it in and recorded IMU data for an hour while it sat on the counter. Check out the plots:
So the biases definitely drift over time, which I assume is due to the change in temperature as the board warms up (I didn't have a way to stream the IMU temperature at the time, but it'd be interesting to try again with that measurement). The accel z bias is particularly bad, and takes almost 20 minutes to stabilize.
The accel calibration code that I have is on my GitHub (imu_calib). It uses a least squares method to compute the biases, scale correction, and cross-axis misalignment terms. The least squares stuff as it stands is going to be too much for the Naze. The other issue we've been having is that it seems pretty sensitive to having very accurate alignments (6 directions, like the Pixhawk) during calibration. The lack of temperature compensation is also not helping.
In terms of what we should do for temperature compensation, the problem I see with doing our own is that the calibration process all of a sudden becomes a lot more complicated/involved. I've been trying to read through the DMP library documentation, and they keep referring to provided software temperature compensation that's done on the MCU. I haven't been able to find where that happens or how to use it yet, but my initial thought is that if they've got a method with all of the right parameters already set up, we should probably try to use that. It may not be as good as a complete calibration for each individual MPU-6050, but I bet it's pretty decent.
What do you think?
I agree. I have read some really good things about the compensation in the DMP. There is a guy who compared the output of the MPU6050 DMP with the Microstrain 3DM-GX2, and he said that they were pretty much identical. I know that each MPU6050 has internal factory calibration, which is available to the DMP, and that's how they get good temperature compensation and the whole thing.
I would be a fan of trying to get the DMP working. I have actually done it before, using Jeff Rowberg's i2cdevlib for the Arduino. Apparently, since then the official documentation for the DMP has been released, and you can get it through the "developer's site" on Invensense. I downloaded the DMP SDK, and that's what I have linked above. I'm not quite sure how to get it working, but I think it would totally be worth a shot. I'm also sure that Simon would pull the DMP into his library if we got it working.
People do say that 90% of the calibration can be taken care of via temperature compensation though, and most people say that bias calibration alone is good enough, rather than including scale and everything. What we need to do is measure the bias as it warms up (like you've just done), then perform a least-squares fit between bias and temperature. If we are feeling really ambitious, we can cycle the temperature a few times to get a bunch of measurements and use those to calibrate. That would certainly be easier to start with; we could get it going in a few days, as opposed to the DMP, which could take as long as a week.
What do you think? Should we worry about that, or just go straight for the DMP? In my opinion, the DMP is the right way to do it, while manual temperature compensation is the quick and dirty way.
Okay, I'm 90% sure I have the DMP working. Do you think you could try the same test as before, except using the DMP libraries I have on the DMP branch of breezy on my GitHub?
I've gotta go home today, but I'll try to clean it up a bit before tomorrow morning.
Okay, some more digging. I am certain that the DMP is working, and the measurements don't wander like they used to. However, the DMP is limited to 200Hz. A pretty big drawback if you ask me.
I'm going to look into trying to manually perform temperature calibration.
You, sir, are a wizard.
Well that's great you got it working, but yeah 200Hz is a pretty big limitation. If that weren't the case I think I would vote for the DMP approach, but that's probably a deal breaker. The other issue that I'm concerned about is licensing; their agreement specifically prohibits inclusion in an open source project using the GNU license. I'm not sure if the BSD license or something else would be allowable or not.
So it looks like we're probably on our own. If I understand correctly, that means we'll do some characterization ourselves to get the linear (or perhaps higher order) coefficients for the temperature compensation, then hard-code them into the firmware (possibly as parameters, although if someone wants to get that deep perhaps they should just modify the source code at that point). Then the end user would do a bias calibration but would not redo the temperature calibration. I'm pretty sure that's what's going on with the DMP stuff anyway. Do I have that right?
Assuming I'm in the right ballpark, my biggest concern would be sample size. Will we be able to collect enough data under enough conditions and from enough IMUs for our calibration to be statistically worthwhile across all devices? What are your thoughts?
I just recorded some data from the accelerometer with the DMP running. It looks pretty nice... I still think that the 200Hz is a bit of a deal breaker, and I didn't know about the licensing agreements.
Looks pretty nice doesn't it?
Oh man, that looks awesome. Well... from reading I'm pretty sure that the temperature compensation happens in software on the MCU side. Do you think we could see where they're doing it and... draw some inspiration... from their approach? Or is 200 Hz good enough? Or should we just see what we can do on our own and see how it compares?
I have done some reading, and I'll find you the references in a little bit (sorry, should have written them down). It turns out that MEMS gyros have a linear relationship between bias and temperature (a relationship based on geometry and FEA analysis and stuff...). What I'm thinking is that we could keep the same accelerometer calibration routine, but just wait for a minute or two at the beginning while the thing warms up. During that time, we could calculate the linear offset for the accelerometer and do the whole least-squares thing in ROS. Then, after that is done, we send the temperature compensation parameters to the Naze, where it adjusts its published values, compensated for temperature, and the accel calibration continues with the 6 axes. What do you think?
That data was collected directly from the FIFO buffer on the MPU and printed to the screen, so the temperature compensation is definitely happening within the DMP. However, like I said in my last comment, the linear relationship holds for a really wide range of temperatures. If we do the least-squares fit over even two or three degrees as it warms up, we should have a really good idea of how to compensate for temperature across the whole range.
I would say let's do our own and compare it with the DMP. I would really like to see a 1000Hz update rate that's as clean as the DMP's.
Okay, that's good to know about the linear relationship. Does the slope of that relationship change between units, or is it pretty consistent? Also, in your data I noticed that there were still some biases in x and y (and possibly z, I just don't know the units). Is that because the IMU was not perfectly level, or would we still have to do some bias adjustment even with the DMP?
For the average quick-start type user, I'd be a little concerned about making the calibration routine take too long or be too complicated (having to start with cool IMU and then letting it sit still for several minutes while it warms up). I'm thinking that if things are consistent enough between units, then we could just have default temperature compensation parameters that would be good enough for 99% of users. For the other 1% we could make the temperature calibration routine available, but it wouldn't be mandatory. What do you think?
As far as the 6-axes part of the calibration, I'm not sure what the best approach is there. I've been playing with calibration for the past week with mixed results, although part of that could be temperature effects. You only need to do all 6 axes if you want to calibrate scale and cross-axis misalignment. In my experience these last few days, it seems like that type of calibration is very sensitive to having accurate positioning, although I'm not sure how the Pixhawk gets around that. I would think that we at least probably don't need to compute the cross-axis terms. As for scale, I'm not sure how big of a difference that would make; it seems to be quite good as is. So if all we want to do is biases, we could potentially just do it in one orientation, sitting on the ground. I'm really not sure yet what the best approach is; I'll keep looking into it.
Also, I think what we really want is one of these. Pretty sweet, right?
That's a good point about the calibration routine. I bet that the temperature compensation is pretty good across the board. In fact, the DMP's temperature calibration is static anyway (you have to flash the binary of the DMP to the MPU6050, and then you can tweak the calibration parameters). That's why there was still some bias after using the DMP.
I expect that if we get a really good temperature calibration on one of our Nazes, then it'll apply to all Nazes. The static offsets tend to differ between IMUs, so we would still have to get those, but that calibration is pretty straightforward.
By the way, to collect my data I just flashed the accelDMP example to my Naze, piped the output of miniterm.py to a file, and walked away, in case you were wondering. I figured it was easier than configuring MAVLink to handle temperature.
Oooh, fancy.
Okay cool, maybe let's try going that route and see how it goes. And that's good to know; I actually was wondering as a matter of fact.
Okay, here are some references:
- Polynomial Degree Determination for Temperature Dependent Error Compensation of Inertial Sensors
- SLUGS UAV Temperature Compensation
- ITG Arduino Driver Issue Thread
I still can't find the one that did the FEA analysis and said that it was linear. Sorry, when I find it I'll post it here.
So, it looks like the Paparazzi autopilot takes two measurements on startup, spaced about a second apart, and extrapolates the temperature compensation from them. I think we could do that even on the Naze. I'll probably work on it some more this afternoon.
Cool, thanks.
Here are a couple of application notes I've been looking at for calibration methods:
Alright, here are the linear temperature compensation results. Notice that the scale on the gyro is super, super low; that's why it seems to curve after calibration (that would be truncated out on the Naze).
The offsets for this test were as follows:
accel_m[0] = -0.0012;
accel_b[0] = 0.2394305;
accel_m[1] = 0.0028;
accel_b[1] = -0.2601385;
accel_m[2] = -0.2;
accel_b[2] = 5.1719;
gyro_m[0] = 0.0000008;
gyro_b[0] = -0.0374724;
gyro_m[1] = 0.0000016;
gyro_b[1] = -0.0353550;
gyro_m[2] = -0.0000;
gyro_b[2] = 0.0082024;
with the standard bias = m*temp + b form, in standard units (g's, deg/s, and deg C).
This is after recording for about 3 minutes. I need to play with how much data we need in order to get a good calibration, but I think this will work. The linear model seems to be pretty good.
Okay, I played around a little bit with how much data we need to gather to get a good calibration. Here is if we calibrate for 5 seconds.
Here is if we calibrate for 3 seconds
And here is 0.5 seconds
Obviously, we get better calibrations if we let it sit longer. I don't think it would be too hard to have it calibrate for 5 or 10 seconds to get the temperature offset. The question, I guess, is whether or not it holds over a wide range of temperatures. I'll try that next (I've got some canned air and a lighter).
Okay, I think that this'll work.
Here is a test where I grabbed some canned freon and a lighter to cycle the temperature while (trying) not to move it.
The big spikes in the data are the motions that happened when I sprayed it, and it seems like there is a bit of a delay in the temperature reading (which wouldn't normally happen), but check out the z-accel especially. A simple linear compensation makes a big difference.
This particular test was calibrated on the first 5 seconds of data; the rest of the data is working off of that calibration.
Closed by #36
We need a good way to improve the IMU calibration.
Ideas: Temperature Compensation? http://www.i2cdevlib.com/forums/topic/53-mpu-6050-temperature-compensation/
Using the DMP? motion_driver_6.12.zip