OpenKinect / libfreenect2

Open source drivers for the Kinect for Windows v2 device

How to see Kinect2 RGB and Depth as a video device (/dev/videoX) on Ubuntu? #975

Closed Nesh108 closed 5 years ago

Nesh108 commented 6 years ago

Hello,

Is it possible to see the kinect as a video device on Ubuntu? I was interested in being able to see the RGB and Depth stream as a v4l2 device to be used with other software.

Is there any driver available?

xlz commented 6 years ago

https://github.com/yoshimoto/gspca-kinect2

This doesn't seem to include decoding of raw ir frames into depth frames.

Nesh108 commented 6 years ago

Thanks for the reply, @xlz!

I did indeed test those drivers but it seems that the modules only work with pure v4l2 and VLC. Everything else sees the devices but cannot get the frames.

Since my first goal was to stream video from the Kinect to OBS, I managed to find a workaround by streaming the VLC window, but that's less than optimal, as most other applications don't allow that.

Would it be possible to give some love to linux libfreenect2 users? 😄

gattytto commented 6 years ago

The gspca-kinect2 module is not a solution so far, and I too am looking for one. @xlz, so far I can't find any app/driver for easy /dev/* access to the microphone + RGB, which is what I want right now. Maybe the Python bindings would be the right path to make a quick app for a VLC RTSP stream or something like that.
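
A rough sketch of that Python-bindings idea: grab color frames (e.g. with the pylibfreenect2 bindings, whose API is only sketched in the comments below and not verified here) and pipe the raw bytes into ffmpeg, which handles the RTSP side. Only the command-line builder below is concrete; the capture loop is hypothetical and needs real hardware.

```python
import subprocess

def ffmpeg_rtsp_cmd(width, height, fps, url, pix_fmt="bgr0"):
    """Build an ffmpeg argv that reads raw frames on stdin and serves RTSP."""
    return [
        "ffmpeg", "-f", "rawvideo",
        "-pix_fmt", pix_fmt,
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",  # raw frames arrive on stdin
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "rtsp", url,
    ]

# Hypothetical capture loop (requires a Kinect v2 and pylibfreenect2; untested):
# from pylibfreenect2 import Freenect2, SyncMultiFrameListener, FrameType
# proc = subprocess.Popen(
#     ffmpeg_rtsp_cmd(1920, 1080, 30, "rtsp://localhost:8554/kinect"),
#     stdin=subprocess.PIPE)
# while True:
#     frames = listener.waitForNewFrame()
#     proc.stdin.write(frames["color"].asarray().tobytes())
#     listener.release(frames)
```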

Nesh108 commented 6 years ago

@gattytto if you are interested, we could look into the problem together and make an app for this. I'm sure many people would enjoy it. Let me know! :smile:

aenertia commented 6 years ago

Can we not just feed v4l2loopback from Protonect directly? This should be relatively trivial, and it would mean the JPEG conversion has already been offloaded to the VA-API engine.
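
For reference, a minimal sketch of what feeding a loopback node could look like in Python, assuming the node's format has already been preset with v4l2-ctl (the device path and capture loop are placeholders, not tested code):

```python
import os

YUYV_BYTES_PER_PIXEL = 2  # YUYV422 packs 2 pixels into 4 bytes

def yuyv_frame_size(width, height):
    """Bytes per YUYV422 frame: each pixel carries a Y plus a shared U or V."""
    return width * height * YUYV_BYTES_PER_PIXEL

# Hypothetical writer loop, assuming the loopback format was preset, e.g.:
#   v4l2-ctl -d /dev/video2 \
#     --set-fmt-video=width=1920,height=1080,pixelformat=YUYV
# fd = os.open("/dev/video2", os.O_WRONLY)
# while capturing:
#     os.write(fd, frame_bytes)  # one full YUYV frame per write()
```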

aenertia commented 6 years ago

FYI - after modifying the viewer.cpp example to size the window to 1920x1080 and changing the location of the RGB texture to get a full frame, I feed this into ffmpeg and can then feed v4l2.

I tried to figure out how to do the YUYV conversion in GLFW and/or have it write directly to a file rather than a window, but it was well beyond my GLFW-fu. Would be nice to have this added to Protonect using libav. Even cleaning up the viewer so that when --nodepth is passed it relocates the RGB render to fill the window frame by default would be nice, but the GL code hurts my brain.
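
For anyone attempting that conversion on the CPU instead of in GLFW, here is a small NumPy sketch of BT.601 RGB-to-YUYV422 packing (function name and exact coefficient rounding are illustrative, not taken from libfreenect2):

```python
import numpy as np

def rgb_to_yuyv422(rgb):
    """Pack an RGB image (H, W, 3, uint8, W even) into YUYV422 bytes (BT.601)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    # Each horizontal pixel pair shares one U and one V sample (4:2:2 subsampling)
    u = (u[:, 0::2] + u[:, 1::2]) / 2.0
    v = (v[:, 0::2] + v[:, 1::2]) / 2.0
    h, w = y.shape
    out = np.empty((h, w * 2), dtype=np.uint8)
    out[:, 0::4] = np.clip(y[:, 0::2], 0, 255)  # Y0
    out[:, 1::4] = np.clip(u, 0, 255)           # U
    out[:, 2::4] = np.clip(y[:, 1::2], 0, 255)  # Y1
    out[:, 3::4] = np.clip(v, 0, 255)           # V
    return out.tobytes()
```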

gattytto commented 5 years ago

Please share the code changes.

SterlingButters commented 4 years ago

@aenertia Can you provide exact steps on how you fed to ffmpeg?

akeyx commented 3 years ago

@SterlingButters You need to first get v4l2loopback and gspca-kinect2 (https://github.com/yoshimoto/gspca-kinect2), and load v4l2loopback with exclusive_caps=yes.

Then, once the gspca_kinect2 module has loaded, use v4l2-ctl --list-devices to discover your input (Kinect v2) and output (v4l2loopback) devices:

# v4l2-ctl --list-devices
bcm2835-codec-decode (platform:bcm2835-codec):
        /dev/video10
        /dev/video11
        /dev/video12

bcm2835-isp (platform:bcm2835-isp):
        /dev/video13
        /dev/video14
        /dev/video15
        /dev/video16

OBS Virtual Camera (platform:v4l2loopback-000):
        /dev/video0

Kinect v1 YUY2 (platform:v4l2loopback-001):
        /dev/video1

Kinect v2 YUY2 (platform:v4l2loopback-002):
        /dev/video2

Xbox NUI Sensor (usb-0000:01:00.0-2):
        /dev/video3
        /dev/video4

Here you can see my Kinect v2 available as /dev/video3 (RGB) and /dev/video4 (depth), plus my three v4l2loopback devices, video0, video1 and video2. It doesn't really matter which v4l2loopback device you use, as long as it has been loaded with exclusive_caps=yes.

So finally you can run ffmpeg like this:

ffmpeg -nostdin -i /dev/video3 -filter:v scale=-1:720,hflip -pix_fmt yuyv422 -f v4l2 /dev/video2

That will allow Chrome and other programs requiring the YUYV422 pixel format to access the Kinect v2 as a webcam.

Note: If you're planning to use this as your daily-driver webcam - forget it. I wanted that, and unfortunately it's way too laggy and CPU intensive. The camera image will be superb - maybe a bit overexposed, which can be corrected to an extent with ffmpeg filters - but it starts lagging very quickly, making the whole thing not worth it. I used a similar solution with my Kinect v1 as my main webcam and it was pretty slick and transparent to CPU resources, but obviously the Kinect v1 resolution isn't great, hence wanting to replace it with the Kinect v2. Eventually I gave up on the Kinect v2 and bought a decent webcam (AVerMedia PW513).
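
Side note on the depth node (/dev/video4 above): it exposes raw sensor frames, not viewable video. If you decode depth through libfreenect2 instead, a rough way to turn its float millimetre frames into an 8-bit grayscale stream for a loopback device might look like this (the 4.5 m max range and linear scaling are assumptions, not libfreenect2 defaults):

```python
import numpy as np

def depth_to_gray(depth_mm, max_mm=4500.0):
    """Map libfreenect2-style float depth (mm) to uint8 grayscale; 0 stays black."""
    d = np.clip(depth_mm, 0.0, max_mm)
    return (d / max_mm * 255.0).astype(np.uint8)
```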

aenertia commented 3 years ago

I never had luck getting the gspca module working on Red Hat/Fedora. I modified the Protonect program to just output the RGB frames, and with CUDA the latency was fine (around 1-2 ms of lag). From there I just captured the window surface in OBS.

akeyx commented 3 years ago

I managed to compile it on Fedora 33. I have to admit I used some branches from pull requests, etc. I can't remember the details now, but I do have the source, which still compiles fine; I can upload it to GitHub if required. I only needed it on my X1 Carbon, and since I have no GPU other than Intel, everything had to be CPU based.