sjelavic opened 8 years ago
The buffer returned from the library is actually a YUV420 image. The python code only uses the Y plane to get a grayscale image, but you could modify it to use the color data in the U and V planes as well.
Once you run solo video acquire, the HDMI input is just a standard V4L2 device at /dev/video0, albeit without the ability to set modes or resolutions, since it gets whatever the GoPro is sending.
The solocam library is just a V4L2 wrapper intended as a more convenient and efficient way of getting grayscale images. Using the default gstreamer pipeline, we were converting from YUV -> RGB -> grayscale, which is inefficient when the hardware effectively already provides grayscale in the form of the Y plane in YUV. Here's the source to libsolocam.so if you want to see what it's doing and/or modify it.
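For illustration, the YUV420 layout described above can be sketched like this (a synthetic buffer, not the actual libsolocam API; the plane arithmetic is the standard YUV420 planar arrangement):

```python
import numpy as np

# Synthetic YUV420 buffer: a full-resolution Y plane (w*h bytes)
# followed by quarter-resolution U and V planes (w*h/4 bytes each).
width, height = 640, 480
buf = np.zeros(width * height * 3 // 2, dtype=np.uint8)

# The grayscale image is just the first w*h bytes, reshaped.
gray = buf[:width * height].reshape(height, width)

# The remaining bytes hold the subsampled color planes.
u = buf[width * height:width * height * 5 // 4].reshape(height // 2, width // 2)
v = buf[width * height * 5 // 4:].reshape(height // 2, width // 2)
```

This is why the Python code can get grayscale "for free": it simply stops reading after the first plane.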
Thanks for the input @kevinmehall . That should work for me.
@kevinmehall The source for libsolocam.so should be in this repo, otherwise it's going to get lost in this issue
I have a few more questions, Kevin. Which cross-compiler do you recommend for compiling the library? I have Cygwin on one machine and Ubuntu on another.
Also, how do you modify solocam.c? Do you just change the pixel format to V4L2_PIX_FMT_RGB or something like that?
Thanks, Stan
Hi @kevinmehall - thanks for your work on the library. I'm keen to get colour images as well, so would you be able to share some instructions for building it too?
The image from libsolocam.so is actually a color image in YUV420 format, despite the function name. This is what comes off the HDMI port, and is the only mode you'll get (at least with the GoPro HDMI input I've tested). It just happens that a YUV420 image begins with the Y plane, which is a grayscale version of the image, followed by color data in the U and V planes. The original application was some computer vision code that needed grayscale, so this was convenient because it could just ignore the color planes when it converts the Y plane to a numpy array.
You can't change the pixel format at the V4L2 level -- you have to convert it yourself or use a library like gstreamer to do it for you.
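As an example of doing the conversion yourself, here is a sketch in plain numpy using the standard BT.601 formulas (this assumes full-range YUV, which may not exactly match what the hardware emits; a library like OpenCV or gstreamer would normally handle this for you):

```python
import numpy as np

def yuv420_to_rgb(buf, width, height):
    """Manual YUV420 -> RGB conversion (BT.601, full-range assumption)."""
    y = buf[:width * height].reshape(height, width).astype(np.float32)
    u = buf[width * height:width * height * 5 // 4].reshape(height // 2, width // 2)
    v = buf[width * height * 5 // 4:].reshape(height // 2, width // 2)
    # Upsample the quarter-resolution chroma planes to full size.
    u = np.repeat(np.repeat(u, 2, axis=0), 2, axis=1).astype(np.float32) - 128.0
    v = np.repeat(np.repeat(v, 2, axis=0), 2, axis=1).astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
```

In practice the cv2.cvtColor route discussed later in this thread is both simpler and faster; this just shows that nothing magic is needed.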
Compiling C code for Solo is kind of a big missing piece in this dev guide, in part because the toolchain that 3DR developers have handy comes from the Yocto-based tree we use to build the entire SoloLink system internally. Let me try and see if there's an easy solution for this case, though...
This builds it on my Ubuntu 15.10 system, but I haven't yet tested the resulting binary on a Solo to make sure it links properly:
sudo apt-get install gcc-arm-linux-gnueabi
arm-linux-gnueabi-gcc solocam.c -O3 -lm -shared -o libsolocam.so
+1
Thanks @kevinmehall, I will try the build. In the meantime I tried using the YUV420 buffer but had a hard time getting the color frame in Python. I ended up using the following for the color frame:
os.system("modprobe mxc_v4l2_capture")
os.system("gst-launch -e tvsrc device=/dev/video0 num-buffers=1 ! mfw_ipucsc ! jpegenc ! filesink location=cam.jpg")
frame = cv2.imread('cam.jpg',1)
This worked but it is pretty slow. So maybe I should have another look at the buffer returned from libsolocam.so and see if I can compose the color frame from it and then if that does not work modify and rebuild the lib.
Anyone had any luck compiling solocam.c to run on the Solo?
I could compile solocam, but I still can't run the SoloCamera example because of a "wrong ELF class: ELFCLASS32" error.
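For what it's worth, ELFCLASS32 errors typically mean the loader was handed a binary built for the wrong word size, e.g. a 32-bit ARM libsolocam.so being loaded into a 64-bit desktop Python rather than on the Solo itself. A quick way to check which class a binary is (a hypothetical helper, not part of the thread's code; `file libsolocam.so` does the same job):

```python
def elf_class(header: bytes) -> str:
    # EI_CLASS is byte 4 of the ELF identification:
    # 1 = ELFCLASS32 (32-bit), 2 = ELFCLASS64 (64-bit).
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return {1: "ELFCLASS32", 2: "ELFCLASS64"}[header[4]]

# On a real file you would read the first 16 bytes:
#   elf_class(open("libsolocam.so", "rb").read(16))
```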
Cool, how did you compile it? Did you compile it on the solo itself?
I managed to get it to compile on the solo, next step: try it out.
First install GCC
smart channel --remove cortexa9hf_vfp_neon -y
smart channel --remove cortexa9hf_vfp_neon_mx6 -y
smart channel --add cortexa9hf_vfp_neon type=rpm-md baseurl=http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.1/rpm/cortexa9hf_vfp_neon/ -y
smart channel --add cortexa9hf_vfp_neon_mx6 type=rpm-md baseurl=http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.1/rpm/cortexa9hf_vfp_neon_mx6/ -y
smart update
smart install gcc gcc-symlinks libc6-dev gcc-dev binutils python-dev -y
gcc -v
Then build!
gcc --std=c99 -c -Wall -Werror -fpic solocam.c
gcc -shared -o libsolocam.so solocam.o
I've been able to modify, compile and use a new libsolocam.so; it builds no problem.
@alexblack Very cool. If you're so inclined you might want to create and point to instructions on how to do this ... perhaps in the wiki.
hey @hamishwillee I posted instructions just above!
@alexblack I am blind. Thank you!
Anyone managed to capture video at 1920x1080? See https://github.com/3drobotics/solodevguide/issues/313
I think I see how to get color images! I haven't tried it. But this line:
https://github.com/3drobotics/solodevguide/blob/master/examples/opencv/SoloCamera.py#L67
image = np.ctypeslib.as_array(bufc.data,shape = (self.height*self.width,)).reshape(self.height, self.width)
is what turns the color YUV420 image into grayscale. In YUV420, the Y channel comes before the others, and it's the luminance.
That line above creates an array backed by the data starting at the start of the buffer, up to and including the first w*h values, which is all the Y pixels.
If you instead made the array include the U and V channels, then you'd have color, and you could ask opencv to convert it to something sensible perhaps with:
img = cv2.cvtColor(img, cv2.COLOR_YUV420p2RGB)
(Alternatively, you could modify and recompile libsolocam to request pixels in RGB, and then modify that as_array line to make a 3-channel array.)
This is interesting. Will give it a try. I have not used the drone for a while and will be getting back into it.
I tried it and it works perfectly:
bufp = BUF_P()
check(solocam.solocam_read_frame(ctx, byref(bufp)))
bufc = bufp.contents
imageYUV = np.ctypeslib.as_array(bufc.data,shape = (bufc.used,)).reshape(int(height*1.5), width)
color = cv2.cvtColor(imageYUV,cv2.COLOR_YUV420P2RGB)
Onboard the Solo it is really slow (less than 2 frames per second). I will try something quicker, maybe reducing resolution.
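If grayscale is enough for part of the pipeline, one cheap way to reduce the per-frame work is to stride-downsample the Y plane before any color conversion (a sketch with a synthetic buffer; in the real code the width, height and buffer come from the solocam context):

```python
import numpy as np

# Stand-in for the frame buffer returned by libsolocam (zeros here).
width, height = 640, 480
buf = np.zeros(width * height * 3 // 2, dtype=np.uint8)

# Grayscale Y plane, then a 2x downsample by striding; this touches
# a quarter of the pixels instead of converting the full-resolution
# color frame on every iteration.
y = buf[:width * height].reshape(height, width)
small = y[::2, ::2]
```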
Nice work! I'd suggest compiling a new libsolocam (see above) and changing the format to RGB right off the bat; it probably does the conversion in hardware, and faster?
can anyone share their compiled libsolocam for rgb?
@alexblack @David-Proconsi Did you have success in increasing the fps with a recompiled libsolocam? Thanks.
@tasarmiento no, I wasn't able to change the FPS or resolution :(
Looks like this lib supports only one video format. Any plans to expand this capability? I use it with face detection and would like to see fps figures for other video formats as well, including color.