ManuelCostanzo opened this issue 5 years ago
I have tried to install libfreenect2 on Ubuntu 64-bit. I used the git repo from Jyurineko, https://github.com/Jyurineko/libfreenect2.git. In the end I have a USB bulk transfer issue. The Kinect v2 is running on my PC via Ubuntu 20.04 with libfreenect2. I have increased the USB memory and deactivated autosuspend. Eventually I found a hint that this error can happen because of an error inside the kernel. I guess I need a little bit of help to understand how to investigate this. @Jyurineko, could you send me some information about your system, please? Perhaps I use a wrong OS (e.g. an older OS). In a second try I used Buster 32-bit; I get the same issue.
Hi, @busybeaver42. My whole setup is: a Kinect v2 connected to a Raspberry Pi 4 running the original Raspberry Pi OS (32-bit, I think; I don't know whether the official OS has a 64-bit version), with libfreenect2 running on the Pi 4. At that time other people recommended that I directly connect a monitor instead of using VNC. It took me a lot of time (and failures) before I just used one monitor for the display, which solves many problems. If the related libraries are installed correctly, you can see the same stream result (four separate videos: RGB, IR, depth, and a fourth one) as with libfreenect2 running on a PC, but my test still tops out at 5 fps, while a PC can run over 30 fps. The screenshot is from VNC, which I used for my school practice; as I said above, a real monitor was connected directly to the Pi 4.
Hi Jyurineko, thx a lot for your answer. Your picture looks very good :-) 5 fps is fast enough for my project.
For my test I have used an RPi 4 Model B from 2018 ...
with ...
After starting Protonect I get the following result:
Could you give me some more information about your running system, please?
I'm very open to hints or ideas to get the system to fly.
P.S.: and yes, I have used my old monitor and this was very helpful ;-) thx for your hint.
@busybeaver42 because I'm just back in my homeland and isolating in China, and I forgot to bring the micro-HDMI cable :( I can't see the display.
I remember that you should disable the "use_opengl" option in the cmake flags, because they use OpenGL 3.1 to visualize the images, and some functions differ between GL and GLES; we should use the GLES functions.
For the USB driver and the rest, I think I just followed the official libfreenect2 tutorial, the Linux installation method.
PS: if possible, I'll upload a Word file named "libfreenect2 memo"; you can check it. That is what I tried, step by step. BTW, some of it is written in Chinese; maybe you can Google-translate it into German or English. (Here is the address: https://docs.google.com/document/d/1cPu28kZ6rHqBMVZyZsxTE4bvEcnLu291cVj471aV5co/edit?usp=sharing
PPS: after I return home, if you still have questions, I can provide other information or email you the whole folder.
Thx very much for your support. I have read the flags inside the picture above and tried it with the same flags. No success so far. I have tried to open your link to the doc, but it is an empty link; nothing is there. I will wait for your return and hope we can find a way to get the system to run :-)
Maybe copy the address yourself and paste it into the browser? I have shared it via Google Docs. My email: ilyetn@gmail.com (I'm not sure it's allowed to post the email address, but I think it is convenient for communication.)
thx for your hint. Now I can access your memos.
Great news, it works!!! My root issue was the USB 3.0 cable. I have more than one Kinect v2 (kv2). I have tested my second (hacked) kv2. The cable of the hacked kv2 is 1 m long (USB 3.0 A-B); it is connected directly between the kv2 and the RPi 4, and the 12 V (2.67 A) power supply is connected directly to the kv2. The unhacked kv2 has a 2 m cable between the kv2 and the Kinect power adapter, plus an additional 1 m USB 3.0 cable between the Kinect power supply adapter and the RPi 4; that is 3 m in total. I use a 3 A power supply for the RPi 4. I would like to know why my laptop has no trouble with the 3 m cable while the RPi 4 does???
I have done tests with several OSes:
Debian (Buster) 32-bit - works
Debian (Buster) 64-bit - works - OS source: https://downloads.raspberrypi.org/raspios_arm64/images/ (2021-05-07)
Ubuntu Desktop (21.04) 64-bit - works
Here is a short setup description for Debian (Buster) 32/64-bit:
sudo nano /boot/config.txt
search for and modify the GPU memory setting:
gpu_mem=64
sudo nano /boot/cmdline.txt
and modify to:
console=serial0,115200 console=tty1 root=PARTUUID=e0e69b19-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles usbcore.usbfs_memory_mb=64 usbcore.autosuspend=-1
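After a reboot you can check that the kernel actually picked up these values; the sysfs paths below are the standard parameter files of the Linux usbcore module:

```shell
# Read back the usbcore parameters set in /boot/cmdline.txt (after reboot).
cat /sys/module/usbcore/parameters/usbfs_memory_mb
cat /sys/module/usbcore/parameters/autosuspend
```

If the first command still prints 16 (the default), the cmdline change was not applied.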
git clone https://github.com/Jyurineko/libfreenect2
sudo apt install libudev-dev
sudo apt install mesa-utils
sudo apt update
sudo apt install libgles2-mesa-dev
Now similar to the Linux install description from: https://github.com/OpenKinect/libfreenect2 ...
sudo apt-get install libusb-1.0-0-dev
sudo apt-get install libturbojpeg0-dev
sudo apt-get install libglfw3-dev
sudo apt-get install libopenni2-dev
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_CUDA=OFF -DENABLE_OPENCL=OFF -DENABLE_OPENGL_AS_ES31=ON -DENABLE_CXX11=ON -DENABLE_VAAPI=OFF
make -j4
sudo make install
sudo ldconfig
sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/
./libfreenect2/build/bin/Protonect gl
Hope that will help :-)
If I read correctly:
setup 1: 1 Kinect, 1 m USB 3.0 cable, 1 USB 3 port of the Pi 4.
setup 2: 1 Kinect, 2 m + 1 m USB 3.0 cables (3 m in total), 1 USB 3 port of the Pi 4.
No external USB hub.
Tom's Hardware tested that the Pi 4B's USB 3 port has a maximum theoretical bandwidth of 625 MB/s. The author then plugged a Mushkin 120 GB external SSD into one of the USB 3 ports; the read/write throughputs were 363 MB/s and 323 MB/s. I'm not sure whether the two USB 3 ports share their bandwidth or not, but it's worth considering (maybe).
And I found issue #320, which asks about multi-Kinect connections. You can check it; hope this will help :)
@busybeaver42 Hi, first of all congrats that it works! :) I want to ask whether I could use your short setup description for Debian (Buster) 32/64-bit in the Readme of my repository? I will also cite your answer link and reference your work. Thank you in advance.
Hi Jyurineko, first, thank you very much for your support, and of course you have the right to use my description in the Readme of your repository.
During my investigation I only ever connected one single Kinect v2 in any setup; I didn't connect two Kinect v2 in parallel. The first test was with a 3 m cable (in total) and one kv2 - that does not work. The second setup, with a 1 m cable and one kv2, does work. Today I have ordered a USB 3.0 A-B cable (0.6 m) and will try to connect two kv2 to the RPi 4 ;-)
Hi busybeaver42, thanks a lot for the permission :)
The USB-IF says a normal USB 3.0 cable can be at most 3 meters long; if the length exceeds this limit, the signal strength decreases. Maybe the cable is too long to hold the Kinect v2's data throughput? A 0.6 m cable should work :) Good luck!
Today I have tested 2 Kv2 on one RPi 4. With usbfs_memory_mb=64 it runs unstably. After I set usbfs_memory_mb from 64 to 128, it was more stable. But in principle it works:
:-)
wow, cool, congrats!! I see the V sign~ ;-) It seems the RPi 4's hardware is indeed much better than I thought.
hello @busybeaver42, @Jyurineko, @tavishm, @CarlosGS, @floe, @yomboprime, I want to use a Kinect v2 on a Raspberry Pi 4 Model B, but I'm not able to connect my Kinect v2 with the RPi 4. Can you tell me which OS I have to use on the RPi 4 for the Kinect v2, and which drivers exactly I have to use? I also have to use it with the Processing software, so please help with that. Thank you.
I used the official "Raspberry Pi OS with desktop". If the 'drivers' you mentioned are OpenGL drivers: last year I tried OpenGL ES 3.1, but some code had to be rewritten in the shader.
Hello @Jyurineko, thanks for your reply. Actually I am using the latest Ubuntu Desktop 64-bit version on a Raspberry Pi 4 Model B, but I am getting the error below:
[Error] [OpenGLDepthPacketProcessorImpl] GLFW error 65543 GLX: Failed to create context: GLXBadFBConfig
[Error] [OpenGLDepthPacketProcessor] Failed to create opengl window
I read above that this happened to you, right? Can you tell me how you solved this error?
Hi, the following description could help in this case:
Raspbian 32-bit OS (should work for the 64-bit OS, too)
sudo nano /boot/cmdline.txt
add:
usbcore.usbfs_memory_mb=128 usbcore.autosuspend=-1
To solve GLFW error 65543 --- GLXBadFBConfig, install from:
git clone https://github.com/Jyurineko/libfreenect2
sudo apt install libudev-dev
sudo apt install mesa-utils
sudo apt update
sudo apt install libgles2-mesa-dev
Now similar to the Linux install description from: https://github.com/OpenKinect/libfreenect2 ...
sudo apt-get install libusb-1.0-0-dev
sudo apt-get install libturbojpeg0-dev
sudo apt-get install libglfw3-dev
sudo apt-get install libopenni2-dev
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_CUDA=OFF -DENABLE_OPENCL=OFF -DENABLE_OPENGL_AS_ES31=ON -DENABLE_CXX11=ON -DENABLE_VAAPI=OFF
make -j4
sudo make install
sudo ldconfig
sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/
./libfreenect2/build/bin/Protonect gl
OpenGL install setup
sudo apt-get install -y libgles2-mesa libgles2-mesa-dev xorg-dev
sudo apt update
sudo apt-get install -y libgtkglext1-dev libqt4-opengl-dev
Test
sudo apt-get install -y mesa-utils
sudo apt-get install libglew-dev
sudo apt-get install -y glew-utils
sudo apt-get install libgl1-mesa-dri
glxgears
RPi OpenGL activation
raspi-config
activate OpenGL
Hint: Raspbian Buster 64-bit: https://downloads.raspberrypi.org/raspios_arm64/images/
hello @busybeaver42, I'm getting the error below when running this command:
pi@raspberrypi:~/build $ cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_CUDA=OFF -DENABLE_OPENCL=OFF -DENABLE_OPENGL_AS_ES31=ON -DENABLE_CXX11=ON -DENABLE_VAAPI=OFF
CMake Error: The source directory "/home/pi" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
I guess you are in the wrong folder. A Startpage search for "does not appear to contain CMakeLists.txt" gives the following link: https://discourse.cmake.org/t/source-directory-does-not-appear-to-contain-cmakelists-txt/3654
cd libfreenect2
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_CUDA=OFF -DENABLE_OPENCL=OFF -DENABLE_OPENGL_AS_ES31=ON -DENABLE_CXX11=ON -DENABLE_VAAPI=OFF
make -j4
OK... that means I have to create the build folder inside the libfreenect2 folder, right?
Yes, you should create the build folder in the libfreenect2 folder, then run cmake inside the build folder. Or you can go back to the libfreenect2 folder and run cmake there, but without the "..": just "cmake -DCMAKE_............. VAAPI=OFF" instead of "cmake .. -DCMAKE_B........VAAPI=OFF".
Yeahhh.... It's worked for me.... Thankyou @busybeaver42 and @Jyurineko
@busybeaver42 @Jyurineko Hello! I saw a lot of similar issues on this thread to what I've experienced and I'm wondering if anyone ever figured out the slow frame rate with the Raspberry Pi 4. I'm using a Pi4 B with a Kinect One but still only getting about 2 FPS. I see I'm losing a lot of packets, each [DepthPacketStreamParser] message says about 45 packets were lost. I ran glxinfo | grep OpenGL and confirmed my OpenGL driver is 3.1. If anyone could help me, I would really appreciate it! I'm trying to live stream the depth camera, so the more FPS the better!
@gregoryosha hello, in my test I only got about 5 FPS at the top, and this "high-level" FPS keeps up for about 30 seconds (I guess, maybe shorter); with longer running time it also drops to 2-3 FPS. Yes, I have the same situation; my test also loses packets. On the FPS issue, I think the Raspberry Pi 4 doesn't have enough hardware performance. Did you try only one or two streams? Only IR, or others?
@Jyurineko Thank you for the fast response! It was a little faster when I just pulled the RGB stream, but I wasn't sure how to get only the depth stream data. I'm currently using openCV and the freenect2 library to pull and display the frames.
For the record, a somewhat less processing-intensive alternative to the Kinect v2 might be one of the Intel Realsense depth cams, which do (almost) all depth processing on-board.
Jyurineko is right; the Raspberry Pi 4's performance has a limit. To reduce the error rate I would suggest using a very short USB cable, and to increase performance I would suggest using a high-performance heat sink and trying to overclock the Raspberry Pi 4. Using other hardware (e.g. a Kinect v3) is in any case a solution, but this is a thread about the Kinect v2.
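That overclock is set in /boot/config.txt. As a sketch (the exact numbers below are illustrative values for a Pi 4 with adequate cooling, not something tested with the Kinect; over_voltage, arm_freq and gpu_freq are the standard Raspberry Pi firmware parameters):

```
# /boot/config.txt -- example mild overclock for a Pi 4 with good cooling.
# Illustrative values only; tune for your own board and watch temperatures.
over_voltage=4
arm_freq=1900
gpu_freq=600
```

A heat sink or fan matters here, since the firmware throttles the clock again as soon as the SoC runs hot.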
~/libfreenect2/build/bin
./Protonect -help
Version: 0.2.0
Environment variables: LOGFILE=
Use: sudo ./Protonect -norgb
Another option is to search for "norgb" inside the libfreenect2 source; you can disable it there.
Got it, thanks for the help. I think I'm gonna switch to a computer vision approach for now but I might try out the Intel Realsense later.
Hello @busybeaver42 @Jyurineko, this might be a stupid question, but is there any way we can get the data from the Kinect v2 and, without processing, transfer it to a PC and process it there? I am trying to use the Kinect v2 on a mobile robot with an RPi, and I have no issue doing all the processing on the PC.
@brijraj005 I assume you want to fetch the video stream from the RPi wirelessly. You could use an RPi wireless video transmitter.
@brijraj005 there are some plugins for GStreamer that allow RGBD image encoding, and you can send it over UDP to a sink that's another computer ... it's what I intend to do, although I wouldn't expect any better frame rates ... potentially worse!
However, I am having issues getting the Kinectv2 to talk at all
Since I'm running Ubuntu Server, I used
Xvfb :99 &
export DISPLAY=:99
./bin/Protonect
but I get errors:
[Error] [usb::TransferPool] failed to submit transfer: LIBUSB_ERROR_IO Input/Output Error
libusb: error [submit_iso_transfer] submiturb failed, errno=11
If I run xvfb as before, but then run
./bin/Protonect -nodepth
then I can grab the RGB images no problem
If I instead run
./bin/Protonect -norgb
then I encounter the same error, indicating that it's an issue with grabbing the depth images
I've tried increasing usbcore.usbfs_memory_mb to 128 and then to 256, but no difference.
Any suggestions?
@TheRealBeef I am not sure what the issue is. I am also running Ubuntu Server 20.04 on an RPi as well as on my laptop, and I followed the same procedure as discussed in this thread; it's working fine on the laptop, but on the RPi it gives maybe 1 or 2 frames per second.
Hm, at least it's working for you!
Unfortunately, the Kinect v2 does take quite a lot of processing power, as it doesn't do anything onboard, and I don't know of any convenient way to stream the raw USB input to another device, although maybe something exists. I'm not sure how much bandwidth is actually required, but I imagine it's not small, especially in a mobile-robot setting where a wired Ethernet connection isn't an option.
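A rough back-of-the-envelope estimate suggests the raw streams are indeed heavy. The sketch below assumes the commonly cited Kinect v2 stream formats (512x424 depth and 1920x1080 color, both at 30 fps) and simple bytes-per-pixel assumptions (16-bit depth, 24-bit RGB); the real USB traffic differs (the color stream is JPEG-compressed on the wire), so treat these as upper bounds for uncompressed data:

```python
# Rough uncompressed-bandwidth estimate for the Kinect v2 streams.
# Resolutions are the well-known sensor formats; bytes-per-pixel are
# simplifying assumptions (16-bit depth, 24-bit RGB).
DEPTH_BYTES = 512 * 424 * 2        # one 16-bit depth frame
RGB_BYTES = 1920 * 1080 * 3        # one 24-bit color frame
FPS = 30

depth_mbps = DEPTH_BYTES * FPS * 8 / 1e6   # megabits per second
rgb_mbps = RGB_BYTES * FPS * 8 / 1e6

print(f"depth: {depth_mbps:.0f} Mbit/s")   # ~104 Mbit/s
print(f"rgb:   {rgb_mbps:.0f} Mbit/s")     # ~1493 Mbit/s
```

So even the depth stream alone saturates a typical Wi-Fi link unless it is compressed first.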
Hello @brijraj005, first, there are no stupid questions ;-) Some hardware does part of the processing internally. Here is some information on what the Kinect One does:
The data traffic for the depth information is much higher than for the RGB data, which means the probability of data errors increases a lot. I don't know much about your setup, but I guess you use a standard Kinect One with the original cable. This cable is very long, and if the cable is long and old you will get more data errors. I have hacked my Kinect One and use a very short 30 cm cable, like this one: https://www.amazon.com/StarTech-com-30cm-SuperSpeed-USB-Cable/dp/B004395680?th=1 That makes the whole system more stable. https://www.youtube.com/watch?v=RdIugGyB38U (an introduction to how to hack a Kinect One; sorry, it's in German, but you will see what needs to be done.) I have used the Kinect One with a 12 V / 2 A power supply adapter.
I have tried in the past to send data (all the Kinect One image data) from the RPi to another PC, but the data rate was so high that the TCP socket protocol showed me its limits. One solution for me was to split the data into several big blocks and reassemble it on the other side; that was not fast enough for me. Then I tried to program it with UDP; that was not stable enough for me. My suggestion is to think about which data the other side actually needs and send only that. And if you need all the data, I would try image compression (https://subscription.packtpub.com/book/data/9781788474443/1/ch01lvl1sec05/saving-images-using-lossy-and-lossless-compression). You could lose information, but perhaps that is fine for your project. I would like to know more about your robot project. What are your plans? What shall your robot do?
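That split-into-blocks-plus-compression idea can be sketched in a few lines of plain Python (stdlib only; the 1400-byte payload size, the header layout and all the names here are my own assumptions, not anything libfreenect2 provides). Each frame is zlib-compressed, cut into datagram-sized chunks with a small sequence header, and reassembled on the receiver; a frame with any missing chunk is dropped, which matches UDP's best-effort semantics:

```python
import struct
import zlib

# Per-datagram header: chunk sequence number, total chunks, frame id.
HEADER = struct.Struct("!HHI")

def pack_frame(frame_id, data, payload_size=1400):
    """Compress one frame and split it into UDP-sized datagrams."""
    comp = zlib.compress(data, 1)  # level 1: fast, good enough for depth maps
    chunks = [comp[i:i + payload_size]
              for i in range(0, len(comp), payload_size)]
    return [HEADER.pack(seq, len(chunks), frame_id) + c
            for seq, c in enumerate(chunks)]

def unpack_frame(datagrams):
    """Reorder received datagrams and decompress the frame, or None if incomplete."""
    parts, total = {}, None
    for d in datagrams:
        seq, total, _frame_id = HEADER.unpack_from(d)
        parts[seq] = d[HEADER.size:]
    if total is None or len(parts) != total:
        return None  # lost packets: drop the whole frame
    return zlib.decompress(b"".join(parts[i] for i in range(total)))

# Round trip with a fake 512x424 16-bit depth frame, received out of order.
frame = bytes(512 * 424 * 2)
packets = pack_frame(0, frame)
assert unpack_frame(list(reversed(packets))) == frame
```

Each packet would then go out via socket.sendto(), and the receiver collects datagrams per frame id until a frame is complete or a timeout expires.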
@busybeaver42 To be honest with you, so far I've seen that TCP is terrible for an image stream from a camera, whether RGBD or just RGB. DDS is just as bad if not worse (as in ROS 2!).
I have a standard 120-degree webcam hooked to the RPi and was having trouble streaming 640x480 at anything above 10 FPS, even with the webcam performing compression onboard and the RPi just forwarding the stream. There was also a significant delay. This was using both TCP and FastDDS, with ROS and ROS 2.
When I changed to sending the data over UDP with GStreamer, I can now stream 4K @ 30 FPS with no issues and almost no delay, which works great for ORB-SLAM. I use GSCAM2 to generate the ROS 2 messages on the receiving PC.
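For reference, a minimal GStreamer pair along those lines might look like this (a sketch, not the exact pipeline used above: the element names are the stock GStreamer H.264/RTP plugins, while the device path, host address, port and bitrate are placeholders to adapt):

```shell
# Sender (on the Pi): grab a V4L2 camera, H.264-encode, send as RTP over UDP.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! videoconvert \
  ! x264enc tune=zerolatency bitrate=2000 speed-preset=ultrafast \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=192.168.1.10 port=5000

# Receiver (on the PC): depacketize, decode and display.
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
  ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
```

The zerolatency/ultrafast settings trade compression ratio for low delay, which is usually the right trade for SLAM.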
Since there are RGBD plugins for GStreamer, once I finally manage to get the Kinect v2 working on the RPi (no idea why I can't but you guys can; I have also tried shorter cables), this is how I will still send the data over UDP. I imagine the computation is too heavy for the RPi to get any reasonably good frame rate, but it's a good stand-in until I acquire something more powerful, like a Jetson, which can do the processing onboard with no problems.
@busybeaver42, @TheRealBeef thank you for your quick replies. I am using a Kinect v2; I have already connected power and everything, and it's working. I am trying to implement RGBD SLAM.
@TheRealBeef I am trying GStreamer; do we need to have ROS installed for that? I am not sure how it works.
Please refer to the screenshot. I tried running the pipeline on the RPi, but it shows: WARNING: erroneous pipeline: no element "freenect2src"
I don't know what I am doing here; can you suggest what I should do?
I got it working, but as soon as I try to send data over UDP it freezes.
Can you tell me how to install the Python interface, freenect2, on a Raspberry Pi 4? It would be great if you could tell me about the Raspberry Pi OS first.
@changhyun98 If you're comfortable with the Linux command line, it shouldn't be very difficult (it'll be mostly similar to Debian). Python is pre-installed; use this to install freenect2: https://rjw57.github.io/freenect2-python/ There are many minor things to note; use the entirety of this thread for guidance.
In case you're new to Linux, I would recommend taking some time to get comfortable.
@tavishm thank you for the answer. I know I need to install libfreenect2 before installing freenect2, but on the Raspberry Pi, can I install it from Jyurineko/libfreenect2-Raspberry-Pi4-support?
Also, I don't think there is an installation guide on the freenect2 homepage, so what command do you use to install it? Can I just use the command below?
$ pip install --user freenect2
thk
I'm so glad to have finally found this conversation. You guys are more technically skilled than I am, but I am following along. I have a friend who is a "paranormal investigator"; while I am a skeptic, I am helping her build some "detection" equipment. It will be a mobile station, and one of the things I'm trying to do is track a dot array in infrared, exactly like the Kinect does.
I am using a Pi 5, though, so maybe some of the hardware limitations may not be such a problem? I would like to record video, though. I don't need the RGB camera; I am only interested in the infrared array and camera.
I tried to read through (skimmed a bit); it seems we don't have a definitive yes on this?
Hello, I have a question... I need to know whether it is possible to use a Kinect One on a Raspberry Pi. I know that the Kinect runs on USB 3.0, but I think that by modifying some parameters, for example "ir_pkts_per_xfer", it could work. Before buying one I want to know if it is possible.
Thank you!