Chandler opened this issue 3 years ago
Hi @Chandler. When you click the Calibrate button in the Viewer, it performs the health-check process and then provides a health check score. At the same time, it gives you the ability to compare a new calibration to an old one before committing to keeping the new result.
So if you receive a health check score that you find unsatisfactory, you can re-do the process until your score improves, and then commit to that result by writing it to the camera hardware once you are happy with it.
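That keep-or-retry loop can be sketched as follows. This is a minimal simulation, not the real SDK: `run_on_chip_calibration` here is a hypothetical stub standing in for the pyrealsense2 method of the same name (which needs a connected camera), the fake table and scores are invented, and the 0.25 threshold is only illustrative.

```python
import random

# Hypothetical stub for the SDK call; the real method lives on
# rs.auto_calibrated_device and returns (calibration_table, health).
def run_on_chip_calibration(seed):
    rng = random.Random(seed)
    table = bytes(rng.randrange(256) for _ in range(16))  # fake table
    health = rng.uniform(0.0, 0.5)                        # fake health score
    return table, health

GOOD_ENOUGH = 0.25  # illustrative threshold for an acceptable score
best_table, best_health = None, float("inf")

# Re-run calibration until the health score is satisfactory (or we give up),
# keeping the best result seen so far.
for attempt in range(10):
    table, health = run_on_chip_calibration(seed=attempt)
    if health < best_health:
        best_table, best_health = table, health
    if best_health <= GOOD_ENOUGH:
        break

print(f"best health after {attempt + 1} attempt(s): {best_health:.3f}")
# Only once you are happy would you commit the result, i.e.
# set_calibration_table(best_table) followed by write_calibration().
```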
In regard to your quoted code:
new_calibration, health = cal.run_on_chip_calibration(...)
It looks as though the formatting is the Python version of the code for running on-chip calibration, and new_calibration and health are your own choice of variable names.
So I would expect the above command to result in the running of the on-chip calibration after the pipeline has been configured with a mode (depth 256x144 at 90 FPS) that is suitable for the calibration operation.
It looks to me as though health is a float variable that the health value from the most recent check is stored in. The white paper states that "returned health is a signed value indicating the calibration health".
My understanding is that new represents the new calibration table, and original represents the old calibration table that is currently stored within the camera hardware. So you can toggle between the new health-check value from the latest calibration and the one produced by the camera's existing calibration table and decide whether to store the new calibration to the camera to replace the previous one.
If you need to keep a long-term record of calibration, it is possible to use the Dynamic Calibrator tool to export the calibration details as an xml file before writing a new calibration to the camera, and then export again after writing a new calibration, so that you have a 'before and after' record. Alternatively, you could keep a record of the health check results in a database.
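On the record-keeping point, the database could be as simple as a CSV file of timestamped health scores. A minimal sketch; the schema, helper name, and serial number are my own invention, and the in-memory buffer stands in for a real file opened in append mode:

```python
import csv
import datetime
import io

def log_health(writer, serial, health, committed):
    """Append one timestamped health-check record."""
    writer.writerow([
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
        serial,           # camera serial number
        f"{health:.4f}",  # health score from run_on_chip_calibration
        committed,        # whether this calibration was written to the camera
    ])

# An in-memory buffer for the sketch; in practice this would be a file
# opened with open("calibration_log.csv", "a", newline="").
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp_utc", "serial", "health", "committed"])
log_health(writer, "817612070000", -0.231, True)   # example values
log_health(writer, "817612070000", -0.375, False)  # example values

print(buf.getvalue())
```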
In regard to knowing whether it is worth burning a new calibration based on the health-check result, the white paper provides a guidance chart.
Thanks Marty, I think I understand now what the health float tells me, but just for my own sanity and for future readers I've documented my understanding. Here is a time series of events:
Time 1: the camera is running and the depth looks good
Time 2: the camera is dropped, potentially affecting calibration
Time 3: the camera is running and we suspect the depth quality might be off
Time 4: I run calibration and save it.
new_calibration, health = cal.run_on_chip_calibration(...)
cal.set_calibration_table(new_calibration)
cal.write_calibration()
Time 5: the camera is running new calibration
You are saying the "health" variable represents the health of the camera at Time 5, not the health of the camera at Time 3.
If that's true, it means we can never quantitatively know if the new calibration is better than what the camera is currently running. We only ever know the health of the camera with a new calibration, not the health of the currently running system.
We can't ever tell if the health degraded from Time 1 to Time 3 after the camera was dropped; all we know is that when we re-ran calibration, we were able to get it back to something good.
Even in realsense-viewer when you toggle between new and original, it changes the visual depth but it does not change the health value displayed. In both cases the health value is the same, presumably it corresponds to the new calibration. But we can't tell if it's an improvement without knowing the health value of the old calibration.
@MartyG-RealSense I am similarly confused by the intrinsic vs extrinsic modes of the on-chip calibration after reading the paper.
If you want to do intrinsic and extrinsic calibration do you do this? @dorodnic I would greatly appreciate your input.
intrinsic_calibration_table, _ = dev.run_on_chip_calibration(json_content='...', timeout_ms=5000) # scan parameter 0
extrinsic_calibration_table, _ = dev.run_on_chip_calibration(json_content='...', timeout_ms=5000) # scan parameter 1
dev.set_calibration_table(intrinsic_calibration_table)
dev.set_calibration_table(extrinsic_calibration_table)
dev.write_calibration()
It's not clear to me if these two calibration tables from the different modes are overwriting each other or if each one only updates the appropriate section of the calibration table for intrinsic vs extrinsic coefficients.
The depth examples only show it running in the default mode: https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/depth_auto_calibration_example.py
but camera intrinsics and camera extrinsics are two separate sets of coefficients, so it is important that I understand how to update each of them. Thank you.
I am now wondering if maybe both the "intrinsic" and "extrinsic" calibration modes update the camera intrinsics/extrinsics, but in ways that are each optimized for the case where one of the two sets of coefficients is the major problem.
I hadn't considered that before.
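For future readers: the mode is selected through the JSON string passed to run_on_chip_calibration, with "scan parameter" 0 for the intrinsic mode and 1 for the extrinsic mode, as in the snippets above. A small sketch of building those strings; the "speed" key and its default value follow my reading of the self-calibration white paper and should be checked against your SDK version:

```python
import json

def calib_json(scan_parameter, speed=3):
    """Build the JSON argument for run_on_chip_calibration.

    scan_parameter: 0 = intrinsic mode, 1 = extrinsic mode
    speed: calibration speed setting (3 is my assumed default;
           verify against the white paper for your SDK version)
    """
    return json.dumps({"speed": speed, "scan parameter": scan_parameter})

intrinsic_json = calib_json(0)  # scan parameter 0
extrinsic_json = calib_json(1)  # scan parameter 1
print(intrinsic_json)
print(extrinsic_json)
```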
You are doing a very good job @Chandler of working through the logic of your questions. :)
When we change extrinsics or intrinsics, only that one thing is updated. For example, if you select intrinsic self-calibration, it will ONLY update the right sensor's py value.
@agrunnet thank you! So would this be correct to run both? Each table write would not interfere with the other?
intrinsic_calibration_table, _ = dev.run_on_chip_calibration(json_content='...', timeout_ms=5000) # scan parameter 0
extrinsic_calibration_table, _ = dev.run_on_chip_calibration(json_content='...', timeout_ms=5000) # scan parameter 1
dev.set_calibration_table(intrinsic_calibration_table)
dev.set_calibration_table(extrinsic_calibration_table)
dev.write_calibration()
or do I need two dev.write_calibration() calls?
You can run both, but it won't really improve much over just running one. It is hard to know a priori whether the real deviation occurred in the intrinsic or extrinsic parameters. For small deviations they can both individually correct for the decay and improve depth equally. For large-scale deteriorations there is a difference, but at that point you really should redo calibration altogether. So in short, just go with either, not both.

If you KNOW that it is most likely to have been an extrinsic error, then correct that. For example, if someone touches the board it is most likely extrinsic. If they touch the lenses then it is most likely intrinsic. If you don't know how the performance degraded and just want to improve depth, then either will work. I prefer intrinsic, but we are still mapping all the different potential stress cases to see which is most likely to need to be corrected.
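That rule of thumb can be written down as a tiny helper. This is purely illustrative; the function name is my own, and the mapping is just the heuristic from the comment above:

```python
def choose_scan_mode(touched_board=False, touched_lenses=False):
    """Heuristic from the discussion: board contact suggests an extrinsic
    error, lens contact suggests an intrinsic one; when the cause is
    unknown, either mode works, with intrinsic preferred."""
    if touched_board and not touched_lenses:
        return "extrinsic"   # scan parameter 1
    if touched_lenses and not touched_board:
        return "intrinsic"   # scan parameter 0
    return "intrinsic"       # unknown cause: either works; intrinsic preferred

print(choose_scan_mode(touched_board=True))   # extrinsic
```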
Hi @Chandler Do you require further assistance with this case, please? Thanks!
@MartyG-RealSense no thank you I am good for now!
You might flag the whitepaper or documentation around on-chip calibration as something that could use clarification. I spent a lot of time with it and I think there are some significant ambiguities about how on-chip calibration works, which leads to ambiguities about what it can be used for.
In particular, the whitepaper mentions that the health check can be used as a diagnostic over time, but I can't quite tell what it would be a diagnostic of.
Thanks a lot.
Thanks very much @chandler - I will add a Documentation tag to this case and keep it open so that RealSense team members who handle documentation can look at it in future.
Adding a note to keep this case open for a further time period.
I do agree about the issue here with the documentation of the health value. Also, something that I didn't understand is why the health value differs between two on-chip calibrations in the same scene, using this code:
while (health < -0.2)
{
    res = cal.run_on_chip_calibration(json, &health, [&](const float progress) {});
    std::cout << health << std::endl;
}
cal.set_calibration_table(res);
res = cal.run_on_chip_calibration(json, &health, [&](const float progress) {});
std::cout << health << std::endl;
I get these values:
starting calibration
-0.375814
-0.383869
-0.480634
-0.368265
-0.231274
-0.296245
-0.25018
-0.243442
-0.286664
-0.187745
-0.303759
So basically I have two questions:
Hi @avizipi The standard approach to using the tool is to perform a calibration and check the health check score, then decide whether to save that calibration to the camera hardware or re-perform calibration to try to obtain a better score. The process was not designed with looping in mind, though I can understand why it would be desirable to automate the calibration process like this instead of requiring a manual input about whether to keep or reject each individual health score. So I do not see a problem with your method.
Based on personal experience, I would say that it is possible for a "good" score to sometimes not match up with the appearance of the depth image, which may look less than good.
There is not a detailed description available publicly for how the self-calibration algorithm works, unfortunately.
The minus values for the health check scores seem unusual, as the scale starts at a positive value of '0', with '0.25' or less being a very good score and higher than 0.25 indicating that re-calibration may be needed.
The instruction while (health < -0.2)
may therefore work better if it were while (health > 0.25)
since there would be no need to keep re-running calibration once the health value was below 0.25.
Thank you very much for your reply!
I thought that the health value could be negative and that I should only consider its absolute value. This issue (https://github.com/IntelRealSense/librealsense/issues/7101) suggests that I am not using the latest SDK.
I upgraded to the highest SDK I can, 2.49 (I am dependent on Conan), and I still see negative health check values. Is this a problem? Should I be worried about these values?
Whilst the documentation for the self calibration expresses the health check values as positive, I confirmed that the RealSense Viewer displays them as minus (negative). So it appears that your negative values are correct in terms of how the scores are output by the self calibration software. I apologize for the confusion.
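Given that the Viewer outputs the scores as negative while the documentation describes a positive scale, a sign-agnostic check on the magnitude covers both conventions. A sketch, with the SDK call stubbed out since the real one needs a connected camera; the 0.25 threshold comes from the guidance figure mentioned earlier in the thread, and the score values are made up:

```python
scores = iter([-0.48, -0.31, -0.19])  # fake health values for the sketch

def run_on_chip_calibration_stub():
    """Stub standing in for the real SDK call, which needs a camera."""
    return b"fake-table", next(scores)

GOOD_ENOUGH = 0.25  # guidance figure for a good score (by magnitude)

health = float("inf")
attempts = 0
while abs(health) > GOOD_ENOUGH:  # sign-agnostic: handles -0.19 and 0.19 alike
    table, health = run_on_chip_calibration_stub()
    attempts += 1

print(f"stopped after {attempts} attempts with health {health:.3f}")
```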
Hi @avizipi Do you require further assistance with this case, please? Thanks!
I am still wondering if there is a way to get the health score of the current (old) calibration. I didn't find any function that can help with that. Do you know of any such function? Or maybe you can explain how the health score is calculated so that I can implement it myself?
If I am understanding correctly, there isn't any way in the SDK to assess the quality of the current calibration, which is already in use.
While writing these lines I am wondering if I can use the health score to better understand the scene (images) the camera captures. Do you think there is any correlation between the health score and the difficulty of the scene? I mean, given that my calibration is good, will a scene that is harder for the depth calculation give a worse health score?
You are correct, the health check score of the previous calibration that was stored to the camera hardware cannot be retrieved. There is also not an available explanation of how the health check score is calculated.
In regard to the impact of the 'hardness' of a scene on the health check score, I would expect any phenomenon that negatively affects the depth image to have a detrimental effect on the score. This could include insufficient lighting in the area that is being depth-analyzed, dark grey / black surfaces or disruptive factors such as fluorescent ceiling lights, reflective surfaces and 'repetitive' (repeating) patterns in the scene such as floor / ceiling tiles.
If you are seeking to analyze the camera's current depth quality then I recommend using the RealSense SDK's Depth Quality Tool, which provides feedback about factors such as RMS Error (the amount of depth measurement error at a particular distance from the camera). Intel have a Camera Depth Testing Methodology PDF guide at the link below that describes how to use the Depth Quality Tool to analyze depth quality.
https://dev.intelrealsense.com/docs/camera-depth-testing-methodology
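As a rough illustration of what the Depth Quality Tool's RMS Error measures, here is a simplified version of the metric. This is not the tool's actual implementation: the real tool fits a plane to a region of interest, while this sketch assumes a fronto-parallel flat wall so the 'plane' reduces to the mean depth, and the sample values are made up:

```python
import math

def rms_error(depth_samples_mm):
    """RMS deviation of depth samples from their mean, in mm.

    Simplification of the Depth Quality Tool's metric: the real tool
    fits a plane to the ROI; assuming a fronto-parallel wall, the
    best-fit plane is just the mean depth.
    """
    mean = sum(depth_samples_mm) / len(depth_samples_mm)
    return math.sqrt(sum((d - mean) ** 2 for d in depth_samples_mm)
                     / len(depth_samples_mm))

# Synthetic samples of a wall nominally 1000 mm from the camera
samples = [998.0, 1001.0, 1000.0, 999.0, 1002.0]
print(f"RMS error: {rms_error(samples):.3f} mm")
```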
Thank you very much! I will go through this article and try to adapt this methodology to our particular case.
Hello RealSense, I have read the calibration white paper carefully and I am uncertain about the meaning of the health check.
https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras
"One of the extremely powerful aspects of the self-calibration algorithm is that it also performs an on-chip “Health-Check”, that does not require any specific visual target. Basically, once a self-calibration has been run, the health-check number will indicate the extent to which calibration deviates from ideal" " This health-check number can also be very valuable to some users in allowing for a simple diagnostic that can be monitored over time."
My question is, when I call

new_calibration, health = cal.run_on_chip_calibration(...)

does health refer to the health of the camera if it's running the existing calibration table, or is it the health of the camera when new_calibration is applied? I hope this makes sense because it's a big difference. If the health check describes the health of new_calibration and not the health of the existing burned-in calibration table, then I'm not sure how I can monitor the health of the camera over time or how to decide whether or not it's worth burning in new_calibration.

To phrase it differently, if health is "bad" does that mean 1) I should re-do the on-chip calibration routine until it's good, or 2) that the existing calibration is bad and I should use new_calibration?

Thank you for your support as always.