rodizio1 / EZ-WifiBroadcast

Affordable Digital HD Video Transmission made easy!
GNU General Public License v2.0

Using OpenCV with EZ-WifiBroadcast #71

Open tictag opened 6 years ago

tictag commented 6 years ago

Hello Rodizio,

Just got my first video stream to work [RPi Zero W with ZeroCam to RPi3 using 2 x RT5370-based nano USB adapters] and it looks great! :) Really fantastic work (from you and Befinitiv)!!

My query does not need a comprehensive answer; I don't mind doing all the digging to get something working, but I don't want to waste my time investigating if my use case simply isn't an option. I plan to take two separate video sources connected to an RPi, do some rudimentary image processing, then merge them together using OpenCV ... and here's the query ... could I then use EZ-WifiBroadcast to transmit that composite video stream to a receiver?

I've read every page of your wiki and everything revolves around broadcasting a source video from Tx to Rx. Basically, can my source video actually be an output stream from OpenCV?

I don't mind putting the hours in to investigate and troubleshoot, but not if this use case simply isn't doable.
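For concreteness, what I have in mind is roughly something like this (purely hypothetical sketch; opencv_merge.py is a placeholder for my OpenCV script writing raw BGR frames to stdout, and the gstreamer element names and caps are guesses on my part):

# AirPi: OpenCV output -> H.264 encode -> EZ-WifiBroadcast transmitter
python3 opencv_merge.py | gst-launch-1.0 fdsrc fd=0 ! videoparse format=bgr width=1280 height=720 framerate=30/1 ! videoconvert ! omxh264enc ! h264parse config-interval=3 ! fdsink fd=1 | /root/wifibroadcast/tx_rawsock -p 0 -b $VIDEO_BLOCKS -r $VIDEO_FECS -f $VIDEO_BLOCKLENGTH -t $VIDEO_FRAMETYPE -d $VIDEO_WIFI_BITRATE -y 0 $NICS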

I would appreciate your thoughts.

Oh, and if you don't get back to me before new year, Happy New Year!! :)

careyer commented 6 years ago

@ivanskj: Ahhh! I think I understand now what you mean... I thought you were suggesting applying pay/depay to the already h264-encoded data. Now I understand that your idea is to shape the raw data with pay/depay. Sorry for misunderstanding you.

Indeed, that is what I tried in the very beginning. Unfortunately gstreamer only allows payloading into RTP (Real-time Transport Protocol) or GDP (GStreamer Data Protocol) packets. TX: gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw ! rtpvrawpay ! fdsink fd=1

RX: gst-launch-1.0 filesrc location=/root/videofifo ! video/x-raw ! rtpvrawdepay ! autovideosink

However, I believe pay/depay is not the reason why the current h264 solution works. My understanding is that pay/depay only loads and extracts the payload data into/from the above-mentioned network protocol packets. However, we do not need network protocol packets in an EZ-WifiBroadcast world ;-).

I think it is rather the parse part that makes it work, because the parse element tries to identify a certain payload type (e.g. h264parse detects h264-encoded frames in a stream of data) and forwards it for processing; all the rest is discarded, which is exactly what we want.

So in an ideal scenario the pipelines would look like this: TX: gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw ! rtpvrawpay ! fdsink fd=1

RX: gst-launch-1.0 filesrc location=/root/videofifo ! video/x-raw ! rtpvrawdepay ! rawvideoparse ! autovideosink

For whatever reason, some of the GStreamer elements which could do the trick, such as:

rawvideoparse (gstreamer-plugins-bad: converts unformatted data streams into timestamped raw video frames)

unalignedvideoparse (gstreamer-plugins-bad: parses unaligned raw video data)

cannot be found on my Raspi, even though I have installed gstreamer-plugins-good, gstreamer-plugins-bad and gstreamer-plugins-ugly completely. Any idea why those are missing? The only element available is videoparse (I used that in my tests above to no avail, see the disturbed test image), which is deprecated and should be replaced with rawvideoparse.

Cheers!

3dpinta commented 6 years ago

Which FLIR One version are you using (which gen and connector), and how are you connecting it to the Pi? Also, which version of the Pi are you using for TX? I would love to do this as well, as I have been looking into it for a year with my Seek Thermal, but that isn't live video I can get wirelessly.

ivanskj commented 6 years ago

Now I understand some of the problems I had when I first started with this. Possibly lots of gibberish being sent along with the video data.

Can you check, using gst-inspect, whether you have the correct sub-package of plugins-bad installed? I compiled a newer version of gstreamer to get access to all the missing plugins. That's a tough job with a lot of pitfalls.

Would it be possible to pipe the raw output from the device without going through gstreamer, i.e. cat /dev/video3 | tx ... ?

rodizio1 commented 6 years ago

@careyer What bitrate do you get from the FLIR cam? Is it correct that it puts out ten 160x120 JPEGs per second? That doesn't sound like much.

@ivanskj Maybe it makes sense to not use gstreamer at the TX at all; this way you'd be transferring just the JPEGs over the wbc link, and then all you'd need is some JPEG display program at the RX.
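Totally untested sketch of what I mean, assuming the flir loopback device actually delivers an MJPEG byte stream when read directly:

# TX (AirPi): pipe the JPEG byte stream straight into the transmitter
cat /dev/video3 | /root/wifibroadcast/tx_rawsock -p 0 -b $VIDEO_BLOCKS -r $VIDEO_FECS -f $VIDEO_BLOCKLENGTH -t $VIDEO_FRAMETYPE -d $VIDEO_WIFI_BITRATE -y 0 $NICS
# RX (GroundPi): any JPEG display would do, gstreamer is just one option
gst-launch-1.0 filesrc location=/root/videofifo1 ! jpegparse ! jpegdec ! autovideosink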

Regarding the pipes and stdout etc.: does gstreamer really output both its console text output and the video on stdout? I have a hard time believing that, but who knows :)

ivanskj commented 6 years ago

Thanks, I will test that out for sure. Maybe it will give a low-latency solution when taking the encoding part out of the equation.

I believe gstreamer has some environment variables controlling the log level. But yeah, it sounds strange that it would output all that info to fdsink. It does print a lot of other warning/debug info when something is configured incorrectly.

ivanskj commented 6 years ago

Any tips on how I can measure the bitrate of the video feed?

careyer commented 6 years ago

@rodizio1: Yes, the FLIR One outputs 160x120 @ 10fps. Whether the output is in MJPEG format I cannot say for sure, though. The resolution does not sound like much, but it is actually pretty decent for a consumer-grade thermal camera. As you can imagine, compressing that tiny amount of data even further with h264 results in a very, very low data rate for EZ-WifiBroadcast to transfer and causes some problems, e.g. it takes longer than anticipated to fill all those data packets before they are transmitted. That is the main reason why I am trying to transmit raw (uncompressed) data at the end of the day, which results in a higher data rate. A proof of concept transmitting via RTP/UDP shows that this indeed works nicely, with very little lag and no problems at all. However, the aim is to go with the almighty EZ-WifiBroadcast, since it has many advantages (you know best) ;-). My current implementation (using h264 compression out of necessity right now) with EZ-WifiBroadcast still suffers from more lag than the original RTP/UDP PoC, but probably has more range and robustness. However, there is room to improve. Being able to fly with visible light and thermal vision on a multicopter has so much potential!

Regarding your question about gstreamer piping stuff out on stdout: I think it really is true; there are several reports about that. That is why stdout is somewhat problematic. As @tbeauchant suggested, it is a cleaner solution to use a custom fifo (created with mkfifo) instead of stdout, which sorts out all kinds of trouble. You never know what process or subprocess might output random stuff to stdout at any given time. It would be great to see something like this implemented (at least optionally via a parameter) in a future tx_rawsock. That would ensure an absolutely "clean" and transparent channel. 👍 👍 👍

@ivanskj: How did you manage to compile the most recent versions of the gstreamer good, bad and ugly plugins? Can you give me some guidance on how to do that? It would be very much appreciated. I would really like to give it a go with the more recent gstreamer elements like rawvideoparse etc. :-)

@3dpinta: I am using a FLIR One Gen2 for Android (Micro USB). Any Raspi will do, but since it is connected via USB you should have at least two USB ports (WiFi + thermal camera). The FLIR One Pro Gen3 is yet untested; it would be nice to know if it works too. The respective Linux driver has been reverse-engineered by some clever guys from eevblog. The driver, in combination with the v4l2loopback kernel module, provides two virtual video devices from the FLIR: thermal (10fps) + HD visible light (10fps). Together with a Raspi cam module (30fps+) you have three video streams to choose from! A brilliant use case for the multi-video-stream capabilities of EZ-WifiBroadcast. Since the thermal video stream consumes only very little bandwidth, it should be no problem to transfer both video signals at the same time: thermal + Raspi cam. That would allow the user to choose which video he wants to see via his RC remote at the flick of a switch. ;-)

Besides all that it just looks AWESOME! #Predator-Vision #HardcoreWifiBroadcastFan

rodizio1 commented 6 years ago

Regarding your question about gstreamer piping stuff out on stdout: I think it really is true; there are several reports about that. That is why stdout is somewhat problematic. As @tbeauchant suggested, it is a cleaner solution to use a custom fifo (created with mkfifo) instead of stdout, which sorts out all kinds of trouble. You never know what process or subprocess might output random stuff to stdout at any given time. It would be great to see something like this implemented (at least optionally via a parameter) in a future tx_rawsock. That would ensure an absolutely "clean" and transparent channel.

Sigh. Read up again about pipes, stdin, stdout and stderr. tx_rawsock does not mix the output data stream and its status messages. That's actually not possible, because it doesn't even send the data stream to stdout but to the wifi card. The rx also doesn't suffer from this (there it would be possible, because it actually does put out the data stream on stdout), because it simply prints its status messages to stderr. I would have assumed gstreamer does the same, which is why I didn't believe that such a stupid problem exists with it.
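For example, you can verify the separation on the ground by redirecting the two rx streams to different places (the paths are just examples):

/root/wifibroadcast/rx -p 0 -d 1 -b $VIDEO_BLOCKS -r $VIDEO_FECS -f $VIDEO_BLOCKLENGTH $NICS 2>/root/rx_status.log >/root/videofifo1
# status messages end up in rx_status.log (stderr), the video data in the fifo (stdout)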

3dpinta commented 6 years ago

@careyer

Thanks for the response. I may pick up a Gen3 and give it a try. Do you have a good set of instructions on how to get it running with the Gen2? If so, I can give it a go with the Gen3 once I pick one up.

rodizio1 commented 6 years ago

@careyer just looked at the video; if it's only around 80kbit, you need to reduce the packet size and FEC depth a lot more. I'd try something like 4/2/256 or 2/2/256, it may need a little experimenting.
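For a rough feel of the numbers (assuming roughly 80 kbit/s, i.e. about 10 kilobytes per second of video data): a block of 8 x 1024-byte packets only fills up after about 8192 / 10240 ≈ 0.8 s, and all of that waiting turns into latency; with 2 x 256-byte packets the block is full after roughly 512 / 10240 ≈ 0.05 s.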

@ivanskj Regarding the bitrate measuring: 1.6 shows the live and max. measured bitrate on the OSD. Or use 'pv'. The 1400-byte packet size you see when CTS is enabled was a 'hack' used in 1.5 to be able to use the same video bitrate (it was fixed at 6mbit in 1.5) for CTS and non-CTS modes. CTS reduces bandwidth; I basically 'bought it back' by using larger packets (which have less overhead and thus give more bandwidth).

In 1.6 this isn't the case anymore, as the bandwidth gets measured, i.e. it will just be a little lower when using CTS (and leaving the default 1024-byte packet size).
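For a quick check with 'pv' on the air side, you can temporarily pipe whatever currently feeds tx_rawsock into pv instead (pv needs to be installed; the placeholder stands for your camera pipeline):

<command that currently feeds tx_rawsock> | pv -r > /dev/null
# pv -r prints the current throughput of the stream instead of transmitting it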

ivanskj commented 6 years ago

Sweet! I am now confident 1.6 will provide a robust solution. I’ll experiment with these new features. Thanks for the info!

@careyer: I used various guides online to compile gstreamer and the plugins. It was a painful process. I can look through my bookmarks and see if I can find them again. V1.6 uses newer kernels; maybe it has an updated repository for apt-get? Precompiled is a lot easier if possible.

careyer commented 6 years ago

@ivanskj: That would be very much appreciated. I did indeed run apt-get update, and I have installed the following plugins:

gstreamer1.0-plugins-good; gstreamer1.0-plugins-bad; gstreamer1.0-plugins-ugly

These should be the most recent ones. However, some gstreamer elements are missing, specifically rawvideoparse (and some others as well).
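For reference, this is how I check what is actually available (the last call prints the element details if it exists, or an error if it doesn't):

gst-inspect-1.0 --version
gst-inspect-1.0 | grep -i videoparse
gst-inspect-1.0 rawvideoparse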

@rodizio1: I tried 2/2/256 and it does indeed work better. I am just wondering if that will interfere with my long-term plan to transmit two video streams (Raspi HD + thermal) at the same time? Do those settings influence the combined data (video1 + video2 + telemetry) or each stream separately? If it works on the combined data, I might be able to go back to standard settings, since the HD video fills up the packets fast enough that the thermal video will not suffer from too much lag. That would be awesome!

@3dpinta : You might want to have a look at http://diydrones.com/profiles/blogs/thermal-imaging-for-your-drone-on-a-budget . There is everything you will need

ivanskj commented 6 years ago

https://lists.freedesktop.org/archives/gstreamer-openmax/2013-March/000724.html

This should be the guide I used to compile it.

careyer commented 6 years ago

@ivanskj: Thank you very much! Very much appreciated. Maybe you can check via gst-inspect-1.0 | grep raw whether you have the rawvideoparse element in your setup?

ivanskj commented 6 years ago

That build is not accessible any more, since I have started on my 1.6 version and the SD card has been overwritten. Isn't there any mention of which version those plugins were added in?

careyer commented 6 years ago

I have tested a bit further... As it seems to me, it works with h264 encoding/decoding because the gstreamer pipeline element h264parse is able to determine what an h264 frame looks like, e.g. where such a frame starts and ends.

For a stream of raw data, however, this is not true. The video looks kind of scrambled and misaligned because the gstreamer pipeline element videoparse (which parses raw video) starts interpreting data as a frame at a random point in the received data. This causes a constant offset from where a frame actually starts. That is also why I can sometimes see the OSD (which is usually at the bottom of the screen) at a random position in the video rendered on the GroundPi.

Does anyone know how we can get this "in sync"? Update: I did some research in other forum threads and in the gstreamer IRC chat. What needs to be done is "framing" the data stream. Framing is needed to determine the start/end of a video frame and to compensate for lost data (e.g. skip a frame). However, there is no way to do this "framing" yet without re-encoding the data. :-(
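To illustrate with numbers (assuming a 16-bit-per-pixel YUV format for the 160x120 thermal stream): one raw frame is 160 * 120 * 2 = 38400 bytes, so if the parser starts reading, say, 10000 bytes into a frame, every subsequent frame is interpreted with that same 10000-byte offset, which matches the constant shift and the misplaced OSD described above.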

Cheers!

BTW: The current solution (with the overhead of h264 en-/decoding) now also works on a Pi0. It did not work initially; the v4l2loopback kernel module had to be recompiled because the Pi3 and Pi0 run different kernel versions.

careyer commented 6 years ago

@rodizio1 : I need a little help please ... Somehow I am stuck:

On the AirPi I am now sending two video streams by invoking tx_rawsock twice with two different ports:

<HD video> | tx_rawsock -p 0 ...
<Thermal video> | tx_rawsock -p 10 ...

On the GroundPi I can successfully receive both video streams with the standard line of RX code by just altering the "-p port" parameter and rebooting so that the change takes effect:

ionice -c 1 -n 3 /root/wifibroadcast/rx -p 10 -d 1 -b $VIDEO_BLOCKS -r $VIDEO_FECS -f $VIDEO_BLOCKLENGTH $NICS | ionice -c 1 -n 4 nice -n -10 tee >(ionice -c 1 -n 4 nice -n -10 /root/wifibroadcast_misc/ftee /root/videofifo2 > /dev/null 2>&1) >(ionice -c 1 nice -n -10 /root/wifibroadcast_misc/ftee /root/videofifo4 > /dev/null 2>&1) >(ionice -c 3 nice /root/wifibroadcast_misc/ftee /root/videofifo3 > /dev/null 2>&1) | ionice -c 1 -n 4 nice -n -10 /root/wifibroadcast_misc/ftee /root/videofifo1 > /dev/null 2>&1

With "-p 0" everthing works like expected: HD-Video + OSD show up. However as soon as I put "-p 10" instead of "-p 0" the (other) video will still show but the OSD won't display anymore!?

I noticed that even with the AirPi not running at all, the GroundPi (with -p 0 set) will display the OSD immediately once booted and will wait (with a black screen) for a connection. As soon as I set -p 10, the OSD won't show up at all. I also noticed that I can exit the HD video (-p 0) with Ctrl-C and the video will stop and the last frame will disappear; when I set -p 10, the video will stop but the last frame will stay on top of the CLI.

What am I getting wrong here? Your help is very much appreciated! Thanks!

P.S.: Is it possible to send the two video streams with different VIDEO_BLOCKS / VIDEO_FECS / VIDEO_BLOCKLENGTH settings for each stream? Sending with the same settings does not work very well due to opposed requirements: HD needs the standard settings (like 8/4/1450, not sure if that is correct) whereas thermal needs 2/2/256.

Update: BTW, is there a minimum VIDEO_BLOCKS / VIDEO_FECS / VIDEO_BLOCKLENGTH setting in order to use the RC uplink (rc=mavlink)? I noticed that with 2/2/256 the RC control does not work anymore. It does work with the standard settings, though.

careyer commented 6 years ago

@rodizio1: Sorry... I don't want to bug you, but do you have any hints for me on the last post? I was not able to make any further progress because I don't understand why these things happen. Thank you very much for your help!

careyer commented 6 years ago

I was finally able to answer my own questions:

With "-p 0" everthing works like expected: HD-Video + OSD show up. However as soon as I put "-p 10" instead of "-p 0" the (other) video will still show but the OSD won't display anymore!?

This issue is somehow solved in 1.6RC5: thermal video and(!) OSD are displayed together now. However, the video will stutter for a moment every 5 seconds or so and then resume. I found out that this is due to some keep-alive checks that are hardcoded for port 0 only, i.e. if I receive only on port -p 10 (and -p 0 is missing), then rx and hellovideo are killed and restarted periodically.

P.S: Is it possible to send the two videostreams with different VIDEO_BLOCKS / VIDEO_FECS / VIDEO_BLOCKLENGTH settings for each of the streams?

I tried this and it works like a charm with different settings for each stream. However, this is only possible with rx and tx_rawsock from 1.6RC3. In 1.6RC5, rx and tx_rawsock have been updated and recompiled; their command line still accepts ports other than the standard port 0, but using ports >0 results in errors for both rx and tx_rawsock, see #95.

tictag commented 6 years ago

As some of you may know, I've been poorly these past few months. Getting on top of it now. I am gobsmacked to see how this thread has progressed! Mind == Blown! Going to try to catch up and hopefully contribute once I have.

johnboiles commented 6 years ago

I'm doing something similar using an Insta360 Air dual-fisheye camera. I'm able to stream successfully using ffmpeg:

ffmpeg -r 30 -copytb 0 -f v4l2 -vcodec h264 -i /dev/video0 -vcodec copy -f h264 - | /root/wifibroadcast/tx_rawsock -p 0 -b 24 -r 12 -f 768 -t 2 -d 18 -y 0 00c0ca96bbbf

I have an issue, however, where the RX Pi is unable to decode video after the RX Pi restarts. Restarting the TX Pi fixes the issue. This makes me think there's something at the start of the h264 stream that's necessary for the video to play. I noticed there's a -ih parameter to raspivid for sending inline headers. Perhaps I need to figure out something similar with gstreamer or ffmpeg? Have any of y'all run into something similar?

careyer commented 6 years ago

Hi @johnboiles,

So you are transmitting the Insta360 video as the only (primary) video stream, i.e. you are not using the RaspiCam at all? I indeed think that you need to send the inline headers so that HelloVideo can recognize which settings to use to decode the data. You might also try a different h264 player (e.g. gstreamer instead of HelloVideo) for debugging purposes.
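Something along these lines might work as a debug player on the GroundPi (untested sketch; avdec_h264 requires the gst-libav package and could be swapped for a hardware decoder element):

gst-launch-1.0 filesrc location=/root/videofifo1 ! h264parse ! avdec_h264 ! autovideosink sync=false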

johnboiles commented 6 years ago

No, I'm not using the RaspiCam. I was able to solve the issue by using the 'psips' project to inject SPS and PPS headers into the h264 stream, like this:

ffmpeg -r 30 -copytb 0 -f v4l2 -vcodec h264 -i /dev/video0 -vcodec h264 -f h264 -r 10 -g 10 -b:v 2000k - | /root/psips/psips  | /root/wifibroadcast/tx_rawsock -p 0 -b 24 -r 12 -f 768 -t 2 -d 18 -y 0 00c0ca96bbbf
johnboiles commented 6 years ago

FYI it worked beautifully! This project is fantastic! Here is the live stream we pushed to Periscope after receiving it on the ground

https://www.pscp.tv/w/1eaJbVdwYDnJX?t=2h27m27s (drag the video to look around -- I recommend looking up at the balloon a little after 3 hours in to see the balloon pop!)

bortek commented 6 years ago

@johnboiles Wow, did you really use EZ-WFB to transmit the video stream? What setup did you use to manage such a long distance?

johnboiles commented 6 years ago

We did! We used the 5.5mbps mode with a 2mbps video stream @ 1920x960, 10fps, 10-frame GOPs, 12/24/768 FEC mode, a big parabolic dish on the ground and a custom-cut quarter-wave whip with ground plane antenna in the air. Worked beautifully! We calculated a theoretical range of up to ~40 mi, and I think we got ~20-30 mi this flight (though the APRS tracker glitched out at altitude, so I'm not sure exactly).

bortek commented 6 years ago

Amazing! 😀 Congratulations to you, nicely done, I saw the video.

EZWFB in the stratosphere, that's a milestone. Great work @rodizio1 and other contributors. 👍

careyer commented 6 years ago

Awesome! A 2nd/alternative camera setup rules! What kind of Insta360 did you use? As far as I know there are different models? Drivers?

johnboiles commented 6 years ago

Insta360 Air w/ micro USB ($100). It shows up in Linux as a UVC webcam, so you can use video4linux2 to get video from it. No extra drivers needed. FWIW, you could probably also add a couple of mirrors and use the same camera as a stereoscopic rig (though you might need to flip the images)!
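To see what the camera offers once it's plugged in, the standard v4l2 tools work (from the v4l-utils package; /dev/video0 is just where it happened to show up for me):

v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext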

geofrancis commented 6 years ago

Could you share exactly how you got a UVC device to work as a camera with wifibroadcast?

johnboiles commented 6 years ago

@geofrancis Take a look at the comment above -- it contains the ffmpeg command that I piped into tx_rawsock.

careyer commented 6 years ago

@johnboiles: Thank you very much. I have also ordered an Insta360 Air now. Could you maybe explain a little further how you saved and processed the footage? I wonder in which format (aspect ratio etc.) the image is transmitted via EZ-WBC? What does it look like when played back via the HDMI output on the GroundPi? How did you manage to save the footage? Did you record the streamed footage at the GroundPi? USB stick? Thank you very much!

Ohh! And how did you manage the decoding at the GroundPi? Does HelloVideo decode the video without any changes needed, or have you used another decoding pipeline? If so, it would be nice if you could post the RX side of things as well. Thanks! The camera arrived today and it is great fun!

geofrancis commented 6 years ago

There are so many good things going on in this thread that could be added to the mainstream wifibroadcast.

@rodizio1 you really need to update the GitHub repository with the latest code; it will make it much easier for people to contribute.

johnboiles commented 6 years ago

+1 I have a handful of changes I'd be happy to contribute!

@careyer I didn't save it locally, just streamed it to Periscope. Ideally I would have saved the raw stream from the camera on a USB stick on the transmitter, but I didn't get time to test that. The format is dual-fisheye natively, so that's what I saw at the ground station. We forwarded packets over UDP to a server using a modified version of rx.c, then I piped the output of that to ffmpeg, using a remap filter to stitch the video to equirectangular, and then sent that out over RTMP. HelloVideo handled the h264 stream just fine. I'm planning to do a more thorough writeup of the project soon.
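In spirit, the restreaming side looked roughly like this (heavily simplified sketch; in reality the packets went over UDP to a server first, xmap.pgm/ymap.pgm stand for precomputed dual-fisheye-to-equirectangular map files, and the RTMP URL is a placeholder):

./rx -p 0 -d 1 -b 24 -r 12 -f 768 wlan0 | ffmpeg -f h264 -i - -i xmap.pgm -i ymap.pgm -filter_complex "[0:v][1:v][2:v]remap" -c:v libx264 -b:v 2000k -f flv rtmp://live.example.com/app/streamkey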

careyer commented 6 years ago

Thank you @johnboiles. A more thorough writeup would be very much appreciated; I love what you've done with your project. Since you replaced the PiCam with the Insta360 Air (i.e. you are using port "-p 0"), it should be possible to just plug a USB stick into the GroundPi after the flight; it should then save the last ~15 min of footage to the USB stick. My Insta360 was just delivered. Awesome little tool, readily available and cheap as beans; it cost only about $50 brand new.

careyer commented 6 years ago

@johnboiles: Trying to get the Insta360 working as well... How did you manage to install ffmpeg on the Raspi?

johnboiles commented 6 years ago

Looking at my notes I think I followed this tutorial

careyer commented 6 years ago

Alright! Thank you John =) I thought there might be a sleeker way to do it... my poor Pi0 has been compiling for almost 13 hours now! Eeek! Would you mind sharing your notes for setting this up? I already stumbled into some issues (which I could happily solve), but it might be easier for others to follow along and not stumble into the same traps. I am still not done yet.

careyer commented 6 years ago

@johnboiles :+1: Okay, ffmpeg compiled nicely! It took 14h on a Pi0 (waiting isn't much fun though ;-)... is there a specific reason why you use ffmpeg instead of gstreamer?). ffmpeg and psips seem to run okay. However, I am experiencing a problem: no video packets make it to the GroundPi (the packet counter does not indicate incoming packets). I therefore disabled the respective TX command line in .profile and started it manually from the shell to have a look at the output.

This is what I get (BTW: my Insta360 is at /dev/video5, which I verified with v4l2-ctl --list-devices):

ffmpeg -r 30 -copytb 0 -f v4l2 -vcodec h264 -i /dev/video5 -vcodec h264 -f h264 -r 10 -g 10 -b:v 2000k - | /root/psips/psips | /root/wifibroadcast/tx_rawsock -p 0 -b 24 -r 12 -f 768 -t 2 -d 18 -y 0

tx_rawsock (c)2017 by Rodizio. Based on wifibroadcast tx by Befinitiv. GPL2 licensed. using RTS frames
ffmpeg version git-2018-07-19-93e157f Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree
libavutil 56. 18.102 / 56. 18.102
libavcodec 58. 21.105 / 58. 21.105
libavformat 58. 17.101 / 58. 17.101
libavdevice 58. 4.101 / 58. 4.101
libavfilter 7. 26.100 / 7. 26.100
libswscale 5. 2.100 / 5. 2.100
libswresample 3. 2.100 / 3. 2.100
libpostproc 55. 2.100 / 55. 2.100
Input #0, video4linux2,v4l2, from '/dev/video5':
Duration: N/A, start: 158.099715, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p(progressive), 1920x960, 30 fps, 30 tbr, 1000k tbn, 2000k tbc
Stream mapping: Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
Warning: Lost connection to stdin. Please make sure that a data source is connected
Warning: Lost connection to stdin. Please make sure that a data source is connected
Warning: Lost connection to stdin. Please make sure that a data source is connected
Warning: Lost connection to stdin. Please make sure that a data source is connected
Warning: Lost connection to stdin. Please make sure that a data source is connected

Any clue why this "Warning: Lost connection to stdin. Please make sure that a data source is connected" gets fired?

tbeauchant commented 6 years ago

It looks like tx_rawsock is not getting any data on stdin; probably ffmpeg is not outputting anything. Try having ffmpeg alone output to a file and see if there's data in it. If not, then you'll have to figure out why ffmpeg is not working properly.

Theo

careyer commented 6 years ago

Update: The Insta360 Air also works quite nicely without ffmpeg (which needs hours and hours of compiling) by making use of gstreamer instead. Very similar to my FLIR One implementation:

nice -n -9 gst-launch-1.0 v4l2src device=/dev/video5 ! video/x-h264,width=1920,height=960,framerate=30/1 ! h264parse config-interval=3 ! fdsink fd=1 | nice -n -9 /root/wifibroadcast/tx_rawsock -p 0 -b $VIDEO_BLOCKS -r $VIDEO_FECS -f $VIDEO_BLOCKLENGTH -t $VIDEO_FRAMETYPE -d $VIDEO_WIFI_BITRATE -y 0 $NICS

Works like a charm! Cheers!

pullenrc commented 6 years ago

Hello! Hopefully I can get some assistance, particularly with compiling OpenCV on the EZ-WFB 1.6 image. I keep getting this error when compiling: /root/opencv-3.3.1/modules/python/src2/cv2.cpp:1674:1: fatal error: error writing to /tmp/ccvCBSyU.s: No space left on device } ^ compilation terminated.

I get that I am running out of space somewhere; however, I am using a 64GB class 10 SD card, so I am confused.

I have tried several times (6), using different size SD cards, using raspi-config to expand the file system, using uncompiled headers, and it keeps failing. I am not sure where to turn next; any and all advice would be greatly appreciated. I would rather be guided to a solution, but I'm not above slipping a Jackson to somebody who can host an image that includes OpenCV 3.3.

Thanks in advance!

tbeauchant commented 6 years ago

Hi,

I see the build process seems to write stuff to the /tmp dir, which is a RAM disk, so my guess would be that you are running out of RAM.

Theo

pullenrc commented 6 years ago

Thank you for the reply @tbeauchant! I have compiled OpenCV many times on Raspbian, however always with a GUI, and always from /home instead of /root. Would compiling it from the /home dir provide more space for the /tmp dir that OpenCV needs to compile?

I am still learning the Linux ecosystem. Thanks again for the help.

pullenrc commented 6 years ago

I was successfully able to install OpenCV 3.3.1 by increasing the size of /tmp with mount -o remount,size=5G /tmp/ (/tmp was previously 8M, which I guess is not big enough).
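An alternative that might avoid enlarging the RAM disk would be pointing the compiler's temp directory at the SD card instead (untested on this image; gcc honours the TMPDIR environment variable for its intermediate files):

mkdir -p /root/tmp
export TMPDIR=/root/tmp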

During my research I found a thread on the Raspberry Pi forum of people having the same problem, and their solution was executing apt-get autoremove and apt-get clean. However, they were using the standard Raspbian image, with which I have never had this problem.

I can't definitively say which action solved my issue, as I executed both before compilation. I am happy though; the battle is half won. The other half: piping my output to tx_rawsock and getting it to work on the other end.

JRicardoPC commented 6 years ago

Hello, I am quite new to the Raspberry Pi. I am trying to send a stream from a thermal camera, but I don't know how. Reading this thread, I see @tictag has the same hardware as me; can you help me a little bit? Right now I can see the image of the thermal camera using Raspbian without any problem, but when I try to use it with WB it does not work for me. For now I am trying to learn about gstreamer, and I will do more tests. Hardware: UAV: Raspberry Pi Zero or Raspberry Pi Zero W and TP-Link TL-WN722N. Ground station: Raspberry Pi 3 and TP-Link TL-WN722N. PiCamera and FLIR Lepton thermal camera with pure breakout.

Thanks.

careyer commented 6 years ago

@JRicardoPC: The tricky part is that you need to transform your video stream into video frames that can be recognized by the GroundPi. If you just pipe a raw video stream through WBC, the GroundPi does not recognize where a video frame starts and ends. Therefore you need to "frame" your video, and the easiest way to do so (without losing CPU cycles) is to run the video stream through the H.264 hardware encoder of the Pi. See my code snippets above.
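As a rough template (this assumes the thermal image ends up on a v4l2 device, e.g. /dev/video1 via v4l2loopback; the encoder element and the caps are assumptions that may need adjusting for the Lepton):

nice -n -9 gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=160,height=120,framerate=9/1 ! videoconvert ! omxh264enc ! h264parse config-interval=3 ! fdsink fd=1 | nice -n -9 /root/wifibroadcast/tx_rawsock -p 0 -b $VIDEO_BLOCKS -r $VIDEO_FECS -f $VIDEO_BLOCKLENGTH -t $VIDEO_FRAMETYPE -d $VIDEO_WIFI_BITRATE -y 0 $NICS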

Cheers and good luck. Best regards Thomas

JRicardoPC commented 6 years ago

Hello again. @careyer, first, thanks for your response. I am trying to follow your progress step by step, but now I have a problem when I try to resize the filesystem. When I run the command resize2fs /dev/mmcblk0p2, it does nothing and prints: "The filesystem is already 448000 (4k) blocks long. Nothing to do!"

Update: I fixed the problem using GParted on another computer.

Update 2: OK, finally I managed to install rpi-source and v4l2loopback, but I have a problem when executing modprobe v4l2loopback: modprobe: ERROR: could not insert 'v4l2loopback': Exec format error

careyer commented 6 years ago

@JRicardoPC: That is a common problem I was also facing. The reason is that the kernel module compiles, but not against the correct kernel version: when the kernel sources are downloaded from the internet, a "+" character gets appended to the version string as an indicator that they were fetched after the fact. When you then try to load the kernel module, it gives you the above error message.
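A quick way to check is to compare the running kernel version with the version the module was built against (the path to the .ko file is just an example):

uname -r
modinfo /root/v4l2loopback/v4l2loopback.ko | grep vermagic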

See: https://github.com/bortek/EZ-WifiBroadcast/issues/71#issuecomment-358156241 https://github.com/bortek/EZ-WifiBroadcast/issues/71#issuecomment-358474470

Cheers!

pullenrc commented 6 years ago

Hello again, @careyer. I am trying to send an OpenCV stream to the GroundPi, doing all of my processing on the air side. I have never compiled a kernel before, so it seems a little daunting. If I am understanding the flow properly (please correct me if I am wrong): the kernel needs to be recompiled with the v4l2loopback driver, which makes it possible to use the hardware encoder on the AirPi to encode to h264, which then gets piped to the GroundPi and displayed as if it were a RaspiCam? Would you be able to help me out with an outline, so I can set out trying to accomplish this? I have OpenCV installed on the AirPi; however, I have only tried to send video from /dev/video0, to no avail. I don't get signal strength (on the OSD) indicating a connection until I go back to the original code.

I am using 1.6RC6. I tried commenting out the line ~770 (maybe off, but close) where the RaspiCam is piped to tx_rawsock, and I commented the v4l lines below, and couldn't get a picture or RSSI on the OSD. I feel like I am getting close; any help would be greatly appreciated.