sigh... this doesn't work 100% of the time. It looks more like a bug in the ffmpeg library. Use rtsp_transport=tcp to force TCP instead.
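For anyone landing here from a search, a minimal sketch of forcing TCP outside of ZoneMinder (the camera URL is a placeholder):

# -rtsp_transport is an input option, so it must come before the input URL
ffplay -rtsp_transport tcp "rtsp://user:pass@CamIP:Port/URL"

Inside ZoneMinder, the equivalent is adding rtsp_transport=tcp to the monitor's Options field, as above.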
What is the difference between RTP/Unicast and RTP/RTSP? I tried your suggestion and so far it seems to be working. Thanks!
ah... you know what, according to the wiki, RTP/RTSP is basically the same thing as RTP/Unicast with rtsp_transport=tcp. The wiki says "Unicast uses udp, rtp/rtsp uses tcp and transports over rtsp port, and rtp/rtsp/http does the same but over the http port." So although RTP/RTSP is an alternative solution to the problem, it still doesn't fix the underlying bug regarding udp streams received with the ffmpeg library.
Yes, was just letting you know how to set TCP without using the Options field, as that drop-down controls it. The issue is most likely here: https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/udp.c. Most of what I have seen points to the introduction of the circular buffer there as the point when the smearing issues started. Others also talk about some OS buffers, but I have not had time to track it down. But if I turn on UDP I get smearing instantly.
Yup - same thing happens for me too. I've seen other remarks pointing at libavformat/udp.c also. For now, TCP works fine and on a gigabit wired network there's not a huge motivation to use UDP - no real gains that I'm aware of. Probably why it hasn't been fixed =)
Going to close this out since the discussion seems to have ended.
Thanks to a little project I keep an eye on, I finally got around to testing this some more. I saw this little snippet a while back and made the change last week. Then it took me a while to work out that the shared network to my dev VM doesn't like UDP (routing across subnets??). Going to monitor it for a while, then try it on my main machine. https://github.com/ua-i2cat/liveMediaStreamer/wiki/Deployment-guide#deployment
Moreover, UDP kernel buffers MUST be modified in order to support high bit rate data (for example Full HD video).
sudo sysctl -w net.core.rmem_max=2097152
sudo sysctl -w net.core.rmem_default=2097152
sudo sysctl -w net.core.wmem_max=2097152
sudo sysctl -w net.core.wmem_default=2097152
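If those values help, a sketch for making them survive a reboot (assumes the usual /etc/sysctl.conf; this part is not from the linked guide):

# Append the receive-side values and reload; run once
printf 'net.core.rmem_max=2097152\nnet.core.rmem_default=2097152\n' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

(The wmem values turn out not to matter for a receive-only consumer; see below.)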
Using ffplay was helpful to confirm it could work with ffmpeg; then it was just a case of replicating it.
ffplay -max_delay 500000 -rtsp_transport udp -v trace "rtsp://user:pass@CamIP:Port/URL"
I have not managed to get the udp_multicast option to work in ffplay or ZoneMinder yet.
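For reference, the attempt looked roughly like this (udp_multicast is a valid -rtsp_transport value in ffmpeg, but it only works if the camera actually announces a multicast session):

ffplay -max_delay 500000 -rtsp_transport udp_multicast -v trace "rtsp://user:pass@CamIP:Port/URL"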
Posting this here to see if it helps anyone else who can reproduce smearing on demand.
@SteveGilvarry would you mind explaining the difference between your solution (increasing the kernel's UDP buffer) and the option mentioned by @dvsingletary (to set the ffmpeg "buffer_size" parameter)?
Also, any idea why the kernel's UDP buffer makes a difference? Shouldn't the kernel be delivering the UDP datagrams to ffmpeg, and leaving the responsibility to ffmpeg to wait until it has received an entire video frame before rendering it on screen? I would appreciate it if you could share your understanding of the issue, and whether or not a fix is possible (or if the fix is to just increase the OS buffer as you've done).
Also, do you have any guidelines for setting an appropriate buffer size, based on the bit rate, resolution, and frame rate of the video?
Edit: From this page, it seems like the "buffer_size" parameter is supported for udp:// URLs, but not for rtsp://. Could that be the root of the problem? https://www.ffmpeg.org/ffmpeg-protocols.html#udp https://www.ffmpeg.org/ffmpeg-protocols.html#rtsp
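For what it's worth, one back-of-envelope heuristic for sizing (my own assumption, not from the ffmpeg docs): the buffer should comfortably hold at least one frame, and the average frame size is bitrate / 8 / fps, with I-frames several times larger.

# e.g. an 8192 kbps stream at 25 fps
echo $(( 8192000 / 8 / 25 ))   # => 40960 bytes per average frame
# I-frames can be 5-10x the average, so a buffer_size in the
# hundreds of KB to low MB range leaves sensible headroom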
@falaca I don't pretend to fully understand it, but here is my current theory. It may well be a combination of both values that fixes it; VLC and LIVE555 talk about buffer size, but they reference net.core.rmem_max as an OS-level limit that will prevent the buffer size from taking effect even when you do increase it. http://www.live555.com/liveMedia/faq.html#packet-loss Note that, size-wise, VLC and LIVE555 both default the buffer size to 2000000.
Using net.core.rmem_default, I believe, bypasses the buffer_size option by telling the OS to default to this larger buffer. This may have unintended consequences for server RAM usage, as every connection will default to the larger size.
Reflecting back on it, we don't need to play with the wmem values; it just happens that the project that put me onto this both reads and writes streams.
Subsequently, a good test could be to increase buffer_size like @dvsingletary suggested, and if it is still not perfect, as he observed, then increase net.core.rmem_max (sudo sysctl -w net.core.rmem_max=2097152) and see if that fixes it.
I really think more testing is needed to clarify exactly what the root cause is; a couple of options I can think of:
My camera seems to have stopped doing it, but then I remembered that I lowered the FPS from 30 down to 5 on a 1080p camera. So I suspect the lower data rate is just helping mine. I will go back to 30fps and full 3MP resolution and see if I can test some more.
You might want to implement a CLI option for rtsp_transport as discussed in the referenced omxplayer thread, as that is how most of our users are working around this issue, assuming your streams offer TCP like most cameras do.
Results of my experiments today. Ran netstat -c --udp -an so I could see what the UDP rx queue was doing. Upped my camera to 1080p at 25fps, its max, and upped the bitrate to 8192kbps. Doing this I could see the rx queue jumping up from 0, but not that far, and I wasn't getting any events with the full smearing. Kind of disappointing, as I used to get it all the time using UDP. Decided to see what I could do to get it to smear. Set up another monitor on the same URL and set both to Motion detect. Still happy campers and no smearing. Ran stress --cpu 32 --io 8 --vm 4 --vm-bytes 128M --timeout 180s to put some extra load on this VM.
That seemed to do the trick and it was starting to smear; rx_queue was jumping up to 190000 and 200000 (the Ubuntu default limit is ~212000).
So I now had Monitor-1 and Monitor-3 both on UDP and both smearing when I stressed the box: http://imgur.com/oiJH4cp
sudo sysctl -w net.core.rmem_max=2000000
sudo sysctl -p
Restarted ZM. Still getting smearing on both monitors.
Added buffer_size=2000000 to the ffmpeg options on Monitor-1, so it should now buffer more. Restarted ZM.
Now when it was being stressed I could not get it to smear on Monitor-1, but Monitor-3 was still breaking up all over the place.
So my current recommendation would be to try rmem_max and buffer_size together, as that will limit the buffer bloat to just the ffmpeg process.
If some more people getting the issue want to chip in: I have not found a way to log the maximum rx_queue values, but the netstat command lets you get some idea of what is going on and whether it correlates with a smear in the video. It is easy for people getting consistent smearing to test this; it's not as useful for the once-a-day type of issue.
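A crude way to actually log it over time (a sketch; assumes net-tools netstat and awk are available):

# Append a timestamped line for every UDP socket with a non-empty receive queue
while true; do
  netstat -uan | awk -v t="$(date +%T)" '$1 ~ /^udp/ && $2 > 0 {print t, $0}'
  sleep 1
done >> rxqueue.log

Grepping that log against event timestamps should show whether a queue spike lines up with a smear.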
Another option to experiment with is fifo_size, as per this streaming-client discussion from ffmpeg:
Alternatively, increase your buffer size, like mplayer ffmpeg://udp://host:port?buffer_size=10000000 (the default is system dependent and typically far too low for any reasonable buffering; on Linux you can only set it to ~200K max anyway, so this alone isn't good enough). Make sure to use the circular buffer, and that the following works: ffmpeg://udp://host:port?buffer_size=10000000&fifo_size=100000 (the fifo_size should not emit a warning; no warning implies that a secondary thread is collecting incoming packets for you).
@SteveGilvarry thanks for the information. I'll try to test it out when I have time, but it may be a while before I get to it. omxplayer doesn't expose the buffer_size option (so I would need to make a custom build), but it does expose fifo_size.
I have another idea that may sound silly but I think is worthwhile. I'd like to try either 1) restreaming a remote UDP feed to an RTMP server running on the local machine (e.g., the nginx rtmp module) and then playing that with omxplayer/ffmpeg, or 2) finding a way to tunnel a remote RTSP UDP stream through a local TCP socket and then trying to play that. If the problem is at the application layer with ffmpeg, doing this may confirm it (if the problem is resolved).
Option 2 seems like the easier one to try. After a quick Google search, I found a tool called udptunnel, so that (maybe in addition to increasing the kernel's UDP buffer size) might do the trick. It seems like netcat can do this too?
And by the way, the main reason why I'd like to use UDP over TCP is for multicasting. So while switching to TCP was a nice temporary fix, in the long run I'd like to switch over to UDP.
@SteveGilvarry I was able to resolve my issue by setting buffer_size, fifo_size, and also increasing the kernel's UDP receive buffer.
The way it works is that a separate thread copies the received data from the UDP socket receive buffer into the circular buffer, and the size of that buffer is set by fifo_size. I asked the ffmpeg developers on IRC for the purpose of the circular buffer, and basically they said it's a hack to avoid blocking, "because otherwise ffmpeg receives udp packets in the same thread as everything else". If you compile ffmpeg with pthreads disabled, it also disables the circular buffer. It should be possible to rewrite the code with a non-blocking UDP socket, and that would probably be a better solution... If anybody is willing to spend the time on it :)
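To watch that circular buffer in action on a plain udp:// input, something like this works (per the ffmpeg protocol docs, fifo_size is counted in 188-byte packets and is honored only by udp://, not rtsp://; the address is a placeholder):

# buffer_size: kernel socket receive buffer in bytes (capped by net.core.rmem_max)
# fifo_size: ffmpeg's circular buffer, in 188-byte packets
# overrun_nonfatal=1: log circular-buffer overruns instead of aborting
ffplay "udp://127.0.0.1:5000?buffer_size=2000000&fifo_size=100000&overrun_nonfatal=1"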
I've just experienced the ffmpeg smear for the first time. I had no idea what it was until I started googling around for vertically distorted images, but there are lots of things on the internet calling it the ffmpeg smear, so that's how I found it.
I've already found and fixed my problem; I just wanted to post this here in case it helps anyone else. I don't expect this to be the problem for everyone, but it's worth checking.
My cameras are a couple of Hikvisions, one indoor and one outdoor, and they had been running fine for a few months. Then I did a reshuffle of my network (I'm a Cisco network tech by trade) and my outdoor camera started playing up, recording vertically distorted images. Here is an example screenshot from video, but it got a lot worse: http://i.imgur.com/3Ge2aIj.png
It only started happening when I made changes to my network, so I had an inkling it was network-related. Anyway, after sniffing around, I found a network issue: the switchport my camera was connected to had fallen back to half duplex while my camera was on 100/full. I could see errors and collisions incrementing on the switch port. I hard-coded my switch's port back to 100/full duplex (to match the camera) and have been keeping an eye on it for a couple of weeks; my smearing problem is gone.
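If anyone wants to check for the same failure mode from the Linux side rather than the switch, something like this works (assumes the NIC is eth0):

# Show the negotiated speed/duplex; Half against a full-duplex peer means collisions
sudo ethtool eth0 | grep -E 'Speed|Duplex'
# RX error counters climbing alongside smears are another giveaway
ip -s link show eth0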
To summarise: I suspect that what you're seeing with the UDP stuff mentioned above ties into a potential network problem, not necessarily an ffmpeg problem, or at least the network is making things worse if there is also an ffmpeg problem. UDP is a connectionless protocol: it just pushes data across the network with no verification that the data got to the other end OK. If there are network issues, the sending device won't have any idea, but you'll see problems on the receiving end (AKA: the video). TCP is a connection-oriented protocol; it verifies that the data made it OK, and if it didn't, it takes corrective measures to try to make it work.
So by using UDP you're actually exposing issues in the network. (Interesting side note: UDP is actually used to stress-test networks, to intentionally expose potential network problems.)
If switching to TCP is fixing some people's problems, then I'd suggest anyone affected check whether they have a network issue rather than a camera issue. Admittedly that may be difficult if you're using residential-grade gear, but if you have a managed switch, log on to it and take a look.
I am having this same issue with some cheap Chinese IP cams, even though I'm using TCP. I don't seem to have this issue with the software my client is currently using (Video Insight), however.
I've used Wireshark to get packet captures for both VMSes, and I've noticed that VI occasionally sends back RTCP packets, followed by another TCP packet. FFmpeg never does this. I have the pcapng file if it helps.
I'll also note I'm using @connortechnology's storageareas branch via his PPA.
I'd really like to find a reliable setup for this, because I'm hoping to pick up a number of other warehouses as clients, and I want to make Zoneminder a key part of the business model.
Same issue as @chickenSniffer here. My network surely is unreliable because I use powerline (Devolo) connections, and UDP packets may arrive late, get lost, or arrive out of order. That shouldn't mess up ffmpeg, in my opinion. No combination of ffmpeg parameters fixed it for me. I have had more success with the deprecated Remote / RTP/RTSP setting, even though the connection sometimes gets lost completely and only a ZoneMinder restart fixes it. Not sure if it's a matter of ffmpeg or ZoneMinder, but the current implementation is very unreliable here.
libvlc worked a little better, although the colors of my images were pretty crazy: red became blue, but green stayed green.
For an rtsp source, ffmpeg supports only buffer_size (via command-line option); it does not support the fifo_size parameter. Is the fifo_size parameter really needed to solve this stream-artifact problem? I have been playing with decreasing the network buffers in the system (via sysctl rmem_max, rmem_default, wmem_max, wmem_default) and the ffmpeg buffer (via the buffer_size command-line option before the input option). I see more artifacts when I decrease the buffers in any of these cases, but when I increase these buffers, I still see artifacts sometimes. I tried recompiling ffmpeg (libavformat/udp.c) with changes, but it changed nothing visible. I also tried cvlc (live555). Yes, it shows less obvious artifacts, but I still see some artifacts or missed frames. When I set the ffmpeg rtsp_transport option to tcp, I see no artifacts, but I get missed frames in the live stream because more resources are needed. I'm sorry, I don't use ZoneMinder now (I used it in the past), but a Google search returns this issue at the top for queries about this problem.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Has anyone made any further progress on this topic? New bins seem to suffer from the same issue.
Same problem, no solution... a lot of smearing on my cameras using UDP.
Same issue here, tested with different cameras. FFplay works much better, so IMHO there must be a solution!?
This is not a problem with ZoneMinder, FFmpeg, or any other software. After much research: it is inherent to UDP that packets happen to get lost. You should almost never use it. If you can't set up RTSP over TCP, you are missing knowledge about your camera. More research here is not your solution. That's also why this is closed.
It is not always possible to use a wider channel and change camera latency, especially when you are streaming from multiple cameras in parallel (10-100 cameras), so rtsp_transport=tcp may produce a stream with a big delay.
You could create documentation about patching ffmpeg (FFmpeg/libavformat/udp.c) and the Linux sysctl net.core ... memory settings for using UDP streams.
Posting this here because I've seen many people comment on this problem online wrt ffmpeg and rtsp streaming over udp. There isn't a single solution to this problem online that I could find. After reading the ffmpeg documentation I stumbled upon the solution. This might be a good thing to add to the wiki under troubleshooting:
By default the UDP buffer size that ffmpeg uses is not large enough to hold an entire frame's worth of data for an HD image at a reasonable bitrate. The symptom is that the received image is smeared downward from some point vertically; it may not happen on every frame. This can be fixed by using the "buffer_size" option for ffmpeg: in the "Source" tab of source setup, add "buffer_size=128000" to the Options field. That seems to work OK for a 2Mbps, 5fps, color, 720p feed. Increase it if you're still getting smeared images.
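As a sanity check on that 128000 figure (my arithmetic, not from the ffmpeg docs):

# average bytes per frame for a 2 Mbps, 5 fps feed
echo $(( 2000000 / 8 / 5 ))   # => 50000
# 128000 bytes holds ~2.5 average frames, which is why it works here;
# higher bitrates or lower frame rates mean bigger frames, so scale it up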