Wesley-Lin opened this issue 9 years ago
One thing that I noticed is that the first 15 frames are green while the camera and sensors initialize.
If you are saving the first incoming frame, I would suggest skipping a few frames before capturing, then checking whether the RGB output is still green.
VLC only understands how to render common formats. Since the camera incorrectly identifies the first depth format as a 16-bit color format, VLC converts that format to RGB, and you get an ugly green video. Workarounds include patching VLC to treat the RealSense formats as grey, or changing the kernel patch to identify one of its available formats as grey. I haven't looked into how to do the first, but I did the second during development of the patch. I didn't keep it because Linux can only select one combination of bit depth and grey per camera, but the camera outputs two 8-bit and two 16-bit formats, so only one would be usable. I gave them special names so that they would all be accessible. You probably only need one of them, so it's probably reasonable to use the kernel workaround.
So, for example, you could change this:
to
to get a depth format to identify as 16-bit grey.
Similarly, you could change an infrared format to 8-bit grey:
to
This doesn't change anything about the raw data; it just gives VLC a clue about how to render it. I'm not sure if VLC will give you a format selection option, or if you will need to set the video mode before opening VLC. You can set the video mode with the command-line app v4l2-ctl. If you install my utility from the udev branch, VLC will be able to control the depth-specific controls of the camera.
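If you'd rather set the mode programmatically (the same thing v4l2-ctl does under the hood), a minimal sketch using VIDIOC_S_FMT might look like this. The device path, resolution, and the choice of V4L2_PIX_FMT_Y16 are assumptions to adjust for your setup, not values from the patch:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Sketch: ask V4L2 to select a 16-bit grey mode on a camera node before
// handing it to VLC. The driver will adjust the struct if it can't match
// the request exactly. Returns 0 on success, -1 on failure.
int set_grey16_mode(const char *dev, unsigned width, unsigned height) {
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;
    struct v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_Y16; // 16-bit greyscale
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    int ret = ioctl(fd, VIDIOC_S_FMT, &fmt);
    close(fd);
    return ret < 0 ? -1 : 0;
}
```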
As far as resolution goes, the depth camera has a maximum resolution of 640x480. This is better than the Kinect2, and I think the best currently on the market in a consumer device. The color camera is basically an off-the-shelf consumer color camera and has a bunch of selectable modes. 1080p cameras are cheap, so that is what they gave you. You can reduce the color resolution to match if you wish. It's important to note, though, that I think the only thing special about it is that they made sure it has time sync with the depth camera and optimized the optics for computer vision applications. It is on the same device, but is not at exactly the same perspective, and a pixel position on the depth camera probably won't align with a pixel at the same position on the color camera. Finding the color of a given pixel from the depth camera in order to create a colorized point cloud requires a computer vision technique called "registration". I haven't created any tools to do this yet.
Hi teknotus, I'm sorry for the delay in responding. Thanks for your suggestion, but I still have other issues, so I'll describe my situation in detail; any pointers will be highly appreciated. I have the RealSense R200 device, and I want to use it on Linux (Ubuntu 14.04). Three video devices appear when the R200 is plugged into the desktop, and I guess they may be IR, depth, and RGB. Therefore, I surveyed your source code and v4l2, then wrote a simple example to read RGB data. I also applied the kernel patch. Here is my example link: https://drive.google.com/file/d/0B5SW75OaeiTNRXAzeDdvdnBBa2s/view?usp=sharing
I encountered some issues, as shown below.
Thank you so much for your support. Regards.
I have the RealSense R200 device, and I want to use it on Linux (Ubuntu 14.04).
My patch is for the F200. I found a patch that looks almost like it for the R200 in code released directly from Intel. Try working with that instead.
There are three video devices when the R200 is plugged into the desktop, and I guess they may be IR, depth, and RGB.
From a USB header someone sent me it looks like the three video feeds are "Left / Right", Depth, and RGB.
I've sometimes had issues with the camera stopping working. From some comments in the source where I found that patch, it seems to be an issue with an ambiguity in the USB video standard, and they went with a solution that worked on Windows, but not Linux. They'll probably fix it in a firmware update. I started to write a tool to do a camera reset via software, but haven't finished it. The reset is easy; it's reliably finding the device file for the USB port the camera is connected to that is a bit tricky.
In the rstools branch is code for finding cameras. It has most of what is needed for finding the USB port to reset. It also contains another example of producing a live video feed.
https://github.com/teknotus/depthview/tree/rstools
getControl is for reading the current value of various controls on the camera; it's used to set the slider positions in the app. I don't know the controls for the R200 yet, so I couldn't tell you how to use them anyway. The controls on the F200 are very useful for getting good data optimized for different conditions. For example, at the default settings you can't get depth super close to the camera because the laser projector washes out the image, but if you lower the brightness of the laser, then you can.
Hi Teknotus, Thanks for your reply. Let me summarize the suggestions; if I'm wrong, please correct me.
I've followed the guidance from this link to reset the camera via software before. http://askubuntu.com/questions/645/how-do-you-reset-a-usb-device-from-the-command-line
Fully automating the process would take the extra step of finding /dev/bus/usb/002/003, or whatever it happens to be connected to, via software. find_cameras.c has most of the code to find it, but needs some more work.
So I added the second line here to find_cameras.c, and I think it may now be printing out the device node you need. You just need to figure out how to take that string and combine it with the code from the link above to reset the camera.
printf("usb syspath: %s\n", udev_device_get_syspath(camera_usb_device));
printf("usb device file: %s\n", udev_device_get_devnode(camera_usb_device));
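Combining that devnode string with the reset trick from the askubuntu link could be sketched like this. USBDEVFS_RESET is the same ioctl the linked answer uses; error handling is minimal, and the call usually needs root or matching udev permissions:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>

// Sketch: once udev_device_get_devnode() has produced something like
// "/dev/bus/usb/002/003", the reset itself is a single ioctl.
// Returns 0 on success, -1 on failure.
int reset_usb_device(const char *devnode) {
    int fd = open(devnode, O_WRONLY);
    if (fd < 0)
        return -1;
    int ret = ioctl(fd, USBDEVFS_RESET, 0);
    close(fd);
    return ret < 0 ? -1 : 0;
}
```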
The Video4Linux2 utility v4l2-ctl can list all the formats a camera supports through Linux. The --help-all option gives the full list of options; more than the man page shows on my system. Basically: v4l2-ctl --list-formats. But you will need to tell it which camera to list the formats for, since the R200 shows up as three cameras.
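For reference, the enumeration v4l2-ctl --list-formats performs can also be done in code with VIDIOC_ENUM_FMT. This is just a sketch, with the device path left as a parameter since the R200 exposes three /dev/video* nodes:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <string>
#include <vector>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Sketch: enumerate the fourcc codes a V4L2 device advertises.
// Run it against each of the camera's nodes to find the one with Z16.
// Returns an empty list if the device can't be opened.
std::vector<std::string> list_formats(const char *dev) {
    std::vector<std::string> formats;
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return formats;
    struct v4l2_fmtdesc desc;
    std::memset(&desc, 0, sizeof(desc));
    desc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    while (ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0) {
        char fourcc[5] = {
            char(desc.pixelformat & 0xff),
            char((desc.pixelformat >> 8) & 0xff),
            char((desc.pixelformat >> 16) & 0xff),
            char((desc.pixelformat >> 24) & 0xff), 0 };
        formats.push_back(fourcc);
        desc.index++;
    }
    close(fd);
    return formats;
}
```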
From the patch I would expect to see the following 4 formats that aren't standard for Linux, and possibly some others that Linux didn't need a patch to support.
Y8I: Greyscale 8-bit, Left/Right interleaved
Y12I: 12-bit, Left/Right interleaved
Z16: Depth data, 16-bit
RW10: Raw data, 10-bit
I expect Y8I and Z16 would be the two easiest to display since they align nicely. Just treat them as 8- or 16-bit grayscale images, and you'll probably have something relatively close to useful.
You shouldn't need getControl to get useful data from the camera. The default settings should be good for many applications, but there are probably useful tweaks available; possibly something from this list: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?member_functions_r200_device_pxccapture.html
Oh, I think Y16 and GREY are also format options.
Hi Teknotus,
Regarding converting the depth image to grey levels: how do you know that "depth = int(depth/31.25 + 0.5); // convert to mm"? Could you please tell me where you got that information? Thanks. Regards
I believe @BlazingForests figured it out for his ROS module. https://github.com/BlazingForests/realsense_camera
Unfortunately it's a number that changes. It seems to be temperature dependent. I made a tool, in rstools, to try to read the temperature and other data from the camera. I haven't built a tool to do the calibration based on those numbers though. The temperature stabilizes after a while, so you could take some measurements and get a regression fit to find the constant that you can use as long as the camera is warm. There is a good chance that the interface for collecting this information has changed with the R200, so it might not work at all. It's also possible that the R200 takes the load off of the drivers and delivers the data in something ready to use, like millimeters, which would make the job a lot easier.
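A regression fit like the one described could be sketched as ordinary least squares over (temperature, constant) pairs. The data you'd feed it would come from your own warm-up measurements; nothing here is from the camera itself:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch: ordinary least-squares fit of the depth-conversion constant
// against camera temperature. Returns {slope, intercept} such that
// constant ~= slope * temperature + intercept.
std::pair<double, double>
fit_line(const std::vector<std::pair<double, double>> &samples) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = double(samples.size());
    for (const auto &s : samples) {
        sx += s.first;
        sy += s.second;
        sxx += s.first * s.first;
        sxy += s.first * s.second;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double intercept = (sy - slope * sx) / n;
    return {slope, intercept};
}
```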
Thanks @teknotus @Wesley-Lin
I haven't figured out the calibration of depth against temperature.
But I can update my code to add @teknotus's get-temperature code, and show the temperature and center depth value in the debug info at the same time.
thx DD
Hi Teknotus, Please forgive me for asking so many questions. In cameradatafeed.cpp, in the createImages() function, you handle two types of formats for display, and I have questions about the "INRI" case. First, you take the depth data and split it into two components, that is: depthpix_cv[0] = high; depthpix_cv[1] = low; This is different from the color image, where you use: colorpix_cv[1] = high; colorpix_cv[0] = low; Is the depth data little-endian? How do you know it needs to be switched?
Second, depth is defined as 16 bits and 1 channel; however, you use 2 channels to assign the value, that is, depth_cv.at<cv::Vec2b>(j,i) = depthpix_cv. My question is why you assign the value this way. Is it equivalent to depth_cv.at<unsigned short>(j,i) = depth?
Finally, I want to compare the R200's depth format with the F200's. I find the R200's format is "Z16_2_1" and its description is "16 bits, 2 pixels packed into 1 32 bit word". Do you know the F200's depth format? Are they the same? If they are, maybe I can follow your example to show the depth image.
Thank you so much. Regards
First, you take the depth data and split it into two components, that is: depthpix_cv[0] = high; depthpix_cv[1] = low; This is different from the color image, where you use: colorpix_cv[1] = high; colorpix_cv[0] = low;
I think that might be a bug that got fixed in the remote_control branch. https://github.com/teknotus/depthview/tree/remote_control
Is the depth data little-endian? How do you know it needs to be switched?
Intel almost exclusively uses little endian. I needed the low and high bytes separate for something I was doing in an older version of the code base, for a feature I've since removed.
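To make the byte order concrete, here's a small sketch of the little-endian split and recombine; the helper names are mine, not from depthview:

```cpp
#include <cstdint>

// In a little-endian 16-bit depth sample, the low byte comes first in
// the byte buffer V4L2 hands you, followed by the high byte.
uint16_t combine(uint8_t low, uint8_t high) {
    return uint16_t(low) | (uint16_t(high) << 8);
}
uint8_t low_byte(uint16_t depth)  { return depth & 0xff; }
uint8_t high_byte(uint16_t depth) { return depth >> 8; }
```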
Second, depth is defined as 16 bits and 1 channel; however, you use 2 channels to assign the value, that is, depth_cv.at<cv::Vec2b>(j,i) = depthpix_cv. My question is why you assign the value this way. Is it equivalent to depth_cv.at<unsigned short>(j,i) = depth?
I think that is specifying the X and Y location for a single channel. depth_cv is defined with 16-bit unsigned pixels, so I don't think it would compile if I tried to assign two channels: Mat depth_cv(height, width, CV_16U);
Finally, I want to compare the R200's depth format with the F200's. I find the R200's format is "Z16_2_1" and its description is "16 bits, 2 pixels packed into 1 32 bit word". Do you know the F200's depth format?
I think Linux will give you a byte buffer, so the fact that it's defined as two pixels in 32 bits of data may not matter if you just cast the pointer to 16-bit values. It might do something weird like reporting that the width is half the real number of pixels.
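The cast could be sketched like this. memcpy is used instead of a raw reinterpret_cast to avoid alignment assumptions; the resulting values assume a little-endian host, which is what you'll have with this camera on a PC:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch: treat the raw byte buffer from the driver as 16-bit depth
// samples, regardless of how the format descriptor packs them.
std::vector<uint16_t> as_depth16(const uint8_t *buf, size_t bytes) {
    std::vector<uint16_t> out(bytes / 2);
    std::memcpy(out.data(), buf, out.size() * 2);
    return out;
}
```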
Since most display systems can only handle 8 bits per color, I keep the 16-bit version for point clouds, but drop down to 8 bits for direct display: depth_cv.convertTo(depth_cv_8, CV_8U, 1.0/256.0);
I'm not totally sure about the F200 depth format. It actually has at least two distinct depth formats. I'm betting one is a 16-bit format that has a linear transformation into real-world units like meters, and the other is something like the 10-bit raw format mentioned in the R200 kernel patch, padded to 16 bits. I don't have many guesses beyond that.
Hi @teknotus @Wesley-Lin
I updated my code to show the center Z value and the RealSense's temperature at the same time.
Thx DD
Hi Teknotus,
I have good news to share with you: I can read the depth image and RGB image from the RealSense R200. However, I have some image-quality issues. Could you please give me some advice? I've posted snapshots for your reference.
pic1: In pic1, we can see there is a lot of noise, I mean many black portions. Does the F200 have this issue? Can we solve it?
pic2: In pic2, I turned the R200 toward the ceiling, and we can see clear waves in it.
pic3: In pic3, we can see the shadow of the hand very clearly. How do we eliminate this phenomenon?
Finally, I really appreciate your kind help. Thank you again for your support. Regards
In pic1, we can see there is a lot of noise, I mean many black portions. Does the F200 have this issue? Can we solve it?
Every 3D camera I've ever worked with has some amount of "holes" in the 3D data. When the camera can't calculate the distance, for whatever reason, it returns black instead.
Shiny objects are notoriously difficult. The shiny spot may wash out the infrared projector, or, if 3D is being generated from disparity images, the shiny spot will, because of physics, appear in an inconsistent location in the two images, throwing off feature matching.
Pure black is also bad because there just isn't anything to work with. Since the 3D is extracted from infrared, a surface might not look black to you but still be black in the infrared spectrum, and still have the problem.
As I understand it, the R200 can generate 3D in two ways. It can use a laser projector like the F200, or it can use two infrared images from different angles and match features to extract 3D. I'm not sure if it does the feature extraction and matching in hardware on the device, or is designed to just send both video streams to the computer to do the work. Depending on how it is generating the 3D, there are different possible lighting problems. If it is using the laser projector, sources of infrared light in the environment might cause interference by washing out the laser light. If it is generating the 3D from infrared disparity images, lack of infrared light could cause problems. The F200 has multiple special controls on it that are most useful as a means of optimizing for different lighting conditions and distances between the camera and the scanning target. The USB header indicates that the R200 has several special controls. They have different specifications from the F200, so I cannot easily guess what they are, but they are probably for the same types of tuning.
In pic2, I turned the R200 toward the ceiling, and we can see clear waves in it.
This might be an integer overflow. Most display systems only handle values between 0-255, but the data from the camera may be as many as 16 bits, or 0-65535. Depending on how you convert, you may get a sweep from black to white every 256 units of distance.
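A tiny sketch of the difference between truncating and scaling when converting 16-bit depth to 8 bits: truncation wraps every 256 units, which would produce exactly this kind of repeating banding, while scaling the full range down does not:

```cpp
#include <cstdint>

// Keeping only the low byte wraps every 256 units of distance ("waves").
uint8_t truncate_to_8bit(uint16_t depth) { return uint8_t(depth); }

// Dropping the low byte maps the full 16-bit range onto 0-255 smoothly.
uint8_t scale_to_8bit(uint16_t depth) { return uint8_t(depth >> 8); }
```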
In pic3, we can see the shadow of the hand very clearly. How do we eliminate this phenomenon?
This is an artifact of how the image is generated. If a part of the scene has line of sight to the camera, but not to the infrared projector, the camera won't be able to image it. In most applications this is worked around by remembering what is there, or by stitching pieces together. So, for example, if the camera isn't moving, you could capture an image of the background before entering the scene, and fill in this type of hole with the previously recorded background. In many applications you don't actually care about the background, and might simply throw away data further than a given distance from the camera. This might be as simple as:
depth > meter ? 0 : depth;
If, on the other hand, you are trying to scan things, you can't get an image of both the front and back of a head at the same time, so you'll have to move the camera around anyway and splice partial 3D models together. There are strategies that use more cameras to fill in holes, but more cameras cost more, and if you are actively scanning with an infrared projector there is some interference between them.
The technology isn't perfect yet.
Hi Teknotus,
Thanks for your advice. It sounds like issue 2 is easier to solve, so I will check whether there is an overflow.
Could you please tell me where I can see that information? I mean, where can I see that header, or what command should I type?
Thank you. Regards.
A verbose lsusb will give you most of the USB header stuff. The part that declares special controls for the R200 is this:
VideoControl Interface Descriptor:
bLength 28
bDescriptorType 36
bDescriptorSubtype 6 (EXTENSION_UNIT)
bUnitID 2
guidExtensionCode {342d6818-2cdd-7340-ad23-7214739a074c}
bNumControl 21
bNrPins 1
baSourceID( 0) 1
bControlSize 3
bmControls( 0) 0xfd
bmControls( 1) 0xfb
bmControls( 2) 0x7f
iExtension 0
In that, bNumControl is the number of controls (21), but the bmControls lines are a bitmask indicating which ones are actually enabled. Not all of the bits are ones, so I expect there are slightly fewer than 21 controls. The F200 has a bNumControl of 6, but the 4th bit is a zero, and thus there is no control 4. The USB video class driver for Linux knows how to read it, though, so you just need to know how to use it to probe for available controls. Some of the information about the controls isn't available in the header, but can be probed for, such as the range and default values. Those numbers were enough for me to match the F200 controls as found on the device with the controls defined in the SDK documentation. Hopefully that will at least partially work with the R200, but it looks like there are a lot more controls.
Look at the kernel documentation for UVCIOC_CTRL_QUERY https://www.kernel.org/doc/Documentation/video4linux/uvcvideo.txt
I used this method before I figured out what the controls were and how to define names for them. The current version of the code has that code removed, so look at an old version of CameraDataFeed::getControls() here: https://github.com/teknotus/depthview/blob/d4a58f94a108424c3514087a8b0096290451593a/cameradatafeed.cpp
You might need to run the code as root, or give yourself special permissions on the camera. You would need to know more about the nature of the controls on the R200 than I currently do to use the method I switched to in the current code base, so this is probably the best method to get at the controls at the moment.
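A sketch of that probing via UVCIOC_CTRL_QUERY, along the lines of the kernel documentation linked above. The unit id of 2 comes from the lsusb dump; the selector and size are placeholders you would vary while probing:

```cpp
#include <cstdint>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/usb/video.h>
#include <linux/uvcvideo.h>

// Sketch: read the current value of one control on a UVC extension unit.
// Returns 0 on success, -1 on failure (likely needs root).
int read_xu_control(const char *dev, uint8_t unit, uint8_t selector,
                    uint8_t *data, uint16_t size) {
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;
    struct uvc_xu_control_query q;
    std::memset(&q, 0, sizeof(q));
    q.unit = unit;         // extension unit id, 2 per the lsusb dump above
    q.selector = selector; // which control to query
    q.query = UVC_GET_CUR; // also try UVC_GET_MIN/MAX/DEF while probing
    q.size = size;
    q.data = data;
    int ret = ioctl(fd, UVCIOC_CTRL_QUERY, &q);
    close(fd);
    return ret < 0 ? -1 : 0;
}
```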
Hi Teknotus,
I changed the code following your source code; however, it still has clear waves in it. Does that mean the raw image is slightly incorrect?
Thanks Regards
Have you tried swapping the high and low bytes? That's my best guess at the moment.
Hi Daniel,
Yes; the high and low bytes did need to be swapped. I tested it, and the depth image doesn't show waves at all.
Thanks Regards
Hi Daniel,
Have you tried to use two or more F200s simultaneously? I referenced your camera.c in the rstools branch, copied it into two separate folders, and launched both. I can see two RGB images from different R200s simultaneously; however, I cannot do that with the depth image. I mean one app can show the depth image, but the other cannot.
to show two rgb images
to show two depth images, but fail
And I find that it stops at the point marked in the red square below.
Do you have any suggestions? Thanks a lot. Regards
I've had inexplicable problems with the depth camera and epoll that don't exist with other cameras. For some reason the epoll just never gets the message that things are ready to go, and it will wait forever. I would try swapping it for some kind of timer like usleep. depthview uses a timer instead of epoll: I think it checks for a new frame once a millisecond, which is more latency and more resources than epoll, but still plenty fast for video.
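The timer workaround could be sketched as a simple poll loop over a non-blocking file descriptor; this is generic retry code, not depthview's actual implementation:

```cpp
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Sketch: instead of blocking in epoll, retry a non-blocking read once a
// millisecond. Returns the number of bytes read, or -1 if no data
// arrived within max_tries milliseconds (or on a real error).
ssize_t wait_for_data(int fd, void *buf, size_t len, int max_tries) {
    for (int i = 0; i < max_tries; i++) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;
        usleep(1000); // check again in one millisecond
    }
    return -1;
}
```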
I haven't had a chance to try two F200 cameras at once.
Hi Daniel,
Now I'm trying to read the RGB and depth data from a single R200 simultaneously, but it still doesn't work. The approach is the same as I described before: two copies of the same app in different folders, reading video1 and video2. However, if I read RGB first and then depth, the depth image cannot be read but RGB still works; if I read depth first and then RGB, depth stops and RGB cannot be read either. Does the F200 have the same issue?
Thanks a lot Regards
Hmm. My best guess is that maybe you are reaching the bandwidth limit of USB. You could try lowering the frame rate and/or the resolution.
Are there any dmesg errors?
Hi Daniel,
The depth resolution is 628x468 and RGB is 640x480; I will check the frame rate. The dmesg output while I try to open the depth and RGB images shows: "Failed to set UVC probe control : -32 (exp. 34)". Is it possible that the UVC driver has an issue?
Thanks Regards
Hi teknotus, I tried to use the VLC application to read raw data from the RealSense, and I also applied the patch to add the UVC formats. However, the depth image is green, and its resolution is not the same as the RGB image. Could you please tell me how to solve this?
Thanks Regards