Closed bigcmos closed 9 years ago
We would have to see your setup to speak to this directly (i.e., take a picture). There are a few filters (both temporal and spatial) built into the camera (i.e., mean, median, etc.) but none that I think will deal with what you are concerned with here. So, in general, "yes" you need to write your own "noise filters".
As stated above, I do not know what your setup looks like, but I'd guess that you have surfaces close to the edges of the camera (e.g., the camera sitting on a table). The camera calculates a radial depth and so what you are seeing here is typical of the edge pixels hitting a surface like a table, hand, etc. Again, I'd have to see your setup to speak intelligently to this image.
Also, I think this is beyond the scope of `libo3d3xx`, unless you believe the software is causing these aberrations, but I highly doubt it. You could check what the image looks like in IFM Vision Assistant as a comparison.
I'm familiar with the noise these types of devices exhibit around the edges of objects, but this device is mounted on a tripod, so there is nothing near the sensor but air. I wasn't expecting to see this much noise around the view frustum. The data in your image here looks gorgeous: https://github.com/lovepark/libo3d3xx/blob/master/doc/100k.md
Is there any calibration that I need to run?
I'll do another test next week and upload a photo of the scene. I can remove these points in software, but I was hoping to have cleaner data from the device.
Also, how do I enable these filters (i.e., mean, median, etc.)?
Can you paste in the output of `o3d3xx-dump`? What I am seeing in your image is also symptomatic of the "ambiguous range" inherent in phase-shift computed ToF. For example, if you are using one of the short-range imager configurations (i.e., 5m), then something at 5.1m will appear in the camera as roughly 10cm away. To fix this, you can change the imager configuration for the application in question. There are really two "knobs you can turn" to affect the image quality / non-ambiguous range here: the number of frequencies used (increasing the range) and the number of exposure times (increasing the measurement detail). So, for example, the short-range imager is configured as single frequency / dual exposure, encoded as `"Type": "under5m_moderate",` in the `Imager` section of the JSON. The image you reference above was set to dual frequency / dual exposure, which gives a 30 m effective range. This gets encoded as `"Type": "upto30m_moderate",`. In general, for the applications I work on, this is my "goto" imager configuration. To see what you have available on your sensor, you can run the `o3d3xx-imager-types` command line tool. You will see a list enumerating things like:
under5m_low
under5m_moderate
under5m_high
upto30m_low
upto30m_moderate
upto30m_high
morethan30m_low
morethan30m_moderate
morethan30m_high
Basically: "under5m" is single frequency, "upto30m" is dual frequency, and "morethan30m" is triple frequency; "low" is single exposure, "moderate" is double exposure, and "high" is triple exposure. For single and double exposure you can also tweak the exposure times in the JSON. Note that tweaking these exposure times will also affect the frame rate (obvious, but worth pointing out). There is more info on this in the "Operations Manual" available in our `doc` directory.
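As a concrete illustration, a dual-frequency / dual-exposure application might carry an `Imager` section along these lines. This is a sketch: the `ExposureTime` and `ExposureTimeRatio` key names are assumptions about the sensor's JSON schema, so verify them against your own `o3d3xx-dump` output before importing anything:

```json
"Imager":
{
  "Type": "upto30m_moderate",
  "ExposureTime": "5000",
  "ExposureTimeRatio": "40",
  "SpatialFilterType": "0",
  "TemporalFilterType": "0"
}
```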
The spatial and temporal filters (which won't help with the aberrations you are seeing in your image here, but I'll speak to them anyway) are also configured in the JSON. In the `Imager` section of the JSON for the application in question, you will see `"SpatialFilterType": "0",` and `"TemporalFilterType": "0",`. Then, to set parameters for each, you will see:
```json
"SpatialFilter":
{
  "Type": "0",
  "TypeStr_": "Off"
},
"TemporalFilter":
{
  "Type": "0",
  "TypeStr_": "Off"
}
```
`camera.hpp` defines the constants for setting the filter types:
```cpp
/**
 * Spatial filter types
 */
enum class spatial_filter : int
{ OFF = 0, MEDIAN_FILTER = 1, MEAN_FILTER = 2, BILATERAL_FILTER = 3 };

/**
 * Temporal filter types
 */
enum class temporal_filter : int
{ OFF = 0, TEMPORAL_MEAN_FILTER = 1, ADAPTIVE_EXPONENTIAL_FILTER = 2 };
```
So, for example, to turn on spatial median filtering, you would set `SpatialFilterType` to `1` in the JSON. Then for each filter type there are parameters to tune. For example, on a temporal filter you can tweak the number of images the statistics get computed across, or for a spatial filter you can set a mask size. For the spatial filter mask sizes, again, symbolic constants are used and can be seen in `spatial_filter_config.h`:
```cpp
/**
 * Mask sizes
 */
enum class mask_size : int { _3x3 = 0, _5x5 = 1 };
```
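Putting those constants together, here is a sketch of what an enabled spatial median filter might look like in the dumped JSON. The `MaskSize` key name and the `TypeStr_` value are assumptions; check them against your own `o3d3xx-dump` output:

```json
"SpatialFilterType": "1",
"SpatialFilter":
{
  "MaskSize": "1",
  "Type": "1",
  "TypeStr_": "Median filter"
}
```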
The easiest way to do this is to take an `o3d3xx-dump`, set the filter type that you want, and import that via `o3d3xx-config`. Then do another `o3d3xx-dump`. The tuneable parameters will then appear in the `TemporalFilter` and `SpatialFilter` sections of the JSON. Tune those and once again `o3d3xx-config` them into the sensor.
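The dump → edit → config round trip above can also be scripted. Here is a minimal Python sketch that takes dumped JSON on stdin-style input, flips the spatial filter type, and emits the modified config for piping back through `o3d3xx-config`. The key layout mirrors the dump excerpts in this thread; verify it against your own sensor's dump:

```python
import json

def enable_median_filter(dump_json, app_index="1"):
    """Return the config with the spatial median filter (type 1) enabled
    for the application at `app_index`."""
    cfg = json.loads(dump_json)
    for app in cfg["o3d3xx"]["Apps"]:
        if app.get("Index") == app_index:
            app["Imager"]["SpatialFilterType"] = "1"
    return json.dumps(cfg)

# In practice you would feed this a real dump, e.g.:
#   o3d3xx-dump | python3 set_filter.py | o3d3xx-config
demo = '{"o3d3xx":{"Apps":[{"Index":"1","Imager":{"SpatialFilterType":"0"}}]}}'
print(enable_median_filter(demo))
```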
Finally, another resource would be to look at the unit tests if you want to set these things programmatically: see, for example, the spatial and temporal filter tests.
We are always looking for people to help us with documentation!
Thank you! This was exactly the problem, and changing to `upto30m_moderate` fixed it.
Is there a way to modify the existing App Imager Types as a one-line command? When I tried echoing to `o3d3xx-config`, a new application was created (with odd default params). I had to upload the full configuration file to get the changes I wanted.
What default application params do you use with `upto30m_moderate`? Could you post an `o3d3xx-dump` of your app config?
Also, is the code available for the default app and `100k.o3d3xxapp`? I am hoping to write my own apps and would like a good starting point.
Happy to help contribute to the documentation, where is the best place to make additions?
WRT the "one-liners", that will be implemented in the `0.2.0` release. And I'll keep pushing my dev branch out to github as feature-based commits are made, if you want access to this stuff pre-release.
I'm not sure what you mean by "the code" for the default app and the 100k app. In terms of the camera, an "app" is just the configuration, which is fully exportable via `o3d3xx-dump`. You can use one of those as a starting point and customize as necessary. Also, WRT 100k, this is currently not fully supported by IFM (yet), so what you see coming out of `o3d3xx-dump` for the 100k app may look incorrect or misleading. A future firmware release will make the 100k app "official" and, by association, the dumps will look more reasonable.
I'll get back to you on documentation. Thanks for your offer to help.
@bigcmos I've pushed out a change to `dev-0.2.0` that should provide better support for piping partial JSON configurations through `o3d3xx-config`. So, to speak directly to your comment above, to specify just the imager type for a particular app while retaining its other settings:
```
$ echo '{"o3d3xx":{"Apps":[{"Index":"1", "Imager":{"Type":"upto30m_moderate"}}]}}' | o3d3xx-config
```
Please note that to modify a particular application, you need to specify the index. When no index is specified, the code assumes you want to create a new application, which is what I think you had experienced above.
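If you generate these one-liners from a script, the index rule is worth encoding explicitly. The helper below is purely illustrative (only the payload shape comes from this thread); it builds a partial config targeting an existing application by index:

```python
import json

def partial_config(index, imager_params):
    """Build a partial config payload for o3d3xx-config.

    "Index" pins the edit to an existing application; omitting it
    would cause a new application to be created, per the note above.
    """
    app = {"Index": str(index), "Imager": dict(imager_params)}
    return json.dumps({"o3d3xx": {"Apps": [app]}})

# e.g. pipe this to o3d3xx-config:
print(partial_config(1, {"Type": "upto30m_moderate"}))
```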
@tpanzarella Great. I'll test out the one liner tomorrow.
WRT the apps, I'm trying to figure out how to modify the configuration for an app. I would like to be able to create an application that can produce different densities of point clouds. What is the best way to get started modifying an app to change the density?
I read on your utils page that `o3d3xx-ifm-export` can be used to export data (e.g., applications) from the camera into a format compatible with IFM Vision Assistant. I exported the app on Linux and copied it over to a PC running IFM Vision Assistant, but couldn't figure out how to open the application.
On your two points above:

1. Regarding `libo3d3xx`, please be aware of https://github.com/lovepark/libo3d3xx/issues/7. Three options: 1) recompile PCL with `-std=c++11`, recompile `libo3d3xx` against that, and link your application using the C++11-aware PCL; 2) use the ROS interface; or 3) use some other out-of-process messaging and serialization (e.g., zmq + protobuf). I highly recommend option 2.
2. The diff won't be telling at this point, because the 100k is not officially supported by IFM and will appear no different than a 23k image at this firmware level. This will change in an upcoming firmware.
AFAIK, what you want to do with the point cloud density inside the camera, using the facilities offered by the XMLRPC interface, is not possible. I may be wrong, but at this point this would be better answered by @graugans, who is at IFM.
Thanks, I'll reach out to him.
OK. It would be good to keep that conversation running here on github (attached to this issue) so that we have it cataloged in our running "knowledge base" for the users of the O3D and the open-source software interfaces. Thanks!
I've started a new bug to track this conversation: https://github.com/lovepark/libo3d3xx/issues/24
Should there be expected noise near the edges of the device view frustum, especially near the device? Do I need to write my own de-noise function or is there a routine I can run? See below image with the cone of noise at the top coming out of the device.
```
$ o3d3xx-dump | grep IFM_Software
    "IFM_Software": "1.1.288",
```