AlexxIT / go2rtc

Ultimate camera streaming application with support for RTSP, RTMP, HTTP-FLV, WebRTC, MSE, HLS, MP4, MJPEG, HomeKit, FFmpeg, etc.
https://github.com/AlexxIT/Blog
MIT License

Question: add stream name and url on the fly #134

Closed fuatakgun closed 1 year ago

fuatakgun commented 1 year ago

Thanks again for your great work on the WebRTC custom card and this add-on.

Apparently, the Home Assistant integration uses an endpoint to add streams (with names and URLs) on the fly.

What is the API endpoint for the same process? I would like to add cameras on the fly like this:

name: camera_name
source: ffmpeg -i tcp:ip:port -c copy -rtsp_transport tcp -f rtsp {output}

I am receiving video bytes inside Home Assistant through the eufy_security custom integration, and I want to send these bytes to the respective TCP port and let go2rtc handle the ffmpeg process and generate the RTSP stream.

Currently, I set up a TCP server, create an ffmpeg process that connects to that server, and publish to rtsp-simple-server for RTSP streaming. I want to get rid of the ffmpeg process management.
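The setup described above can be sketched with asyncio: a TCP server that holds incoming websocket bytes until a client (ffmpeg) connects, then writes them out. This is a minimal illustration, not the integration's actual code; `run_tcp_feeder` is a hypothetical name, and in the real setup the chunks would arrive continuously from the eufy websocket.

```python
import asyncio

async def run_tcp_feeder(host: str, port: int, chunks: list[bytes]) -> None:
    """Serve one connection: when the client (ffmpeg) connects,
    write all byte chunks to it, then shut down."""
    done = asyncio.Event()

    async def on_connect(reader: asyncio.StreamReader,
                         writer: asyncio.StreamWriter) -> None:
        for chunk in chunks:
            writer.write(chunk)   # raw video bytes from the websocket
            await writer.drain()
        writer.close()
        await writer.wait_closed()
        done.set()

    server = await asyncio.start_server(on_connect, host, port)
    async with server:
        # Keep the server alive until one client has received everything.
        await done.wait()
```

ffmpeg would then be started separately as the client side, e.g. with `-f h264 -i tcp://host:port`.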

NickM-27 commented 1 year ago

You can see how we are using it in frigate: https://github.com/blakeblackshear/frigate/blob/b1ec56de29b7ee0017954b0b4a841ac06def068b/frigate/restream.py#L47
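For illustration, the call Frigate makes boils down to a single HTTP PUT against go2rtc's streams API. A small sketch of building that request URL (the helper name is hypothetical; the `PUT /api/streams?name=...&src=...` shape follows how the linked restream.py uses the API):

```python
from urllib.parse import urlencode

def add_stream_url(base_url: str, name: str, src: str) -> str:
    """Build the URL for a PUT request that registers a stream
    with go2rtc on the fly (name = stream name, src = source)."""
    return f"{base_url}/api/streams?{urlencode({'name': name, 'src': src})}"
```

Usage would then be something like `requests.put(add_stream_url("http://127.0.0.1:1984", "camera1", "rtsp://..."))`.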

fuatakgun commented 1 year ago

Thanks a lot, let me check.

AlexxIT commented 1 year ago

How can ffmpeg produce RTSP from TCP? It needs to know the codec, and it can't determine the codec from your example:

ffmpeg -i tcp:ip:port -c copy -rtsp_transport tcp -f rtsp {output}

fuatakgun commented 1 year ago

You are right, I missed the important stuff and focused on my question.

I will be sending this

ffmpeg -f h264 -i tcp:ip:port -c copy -rtsp_transport tcp -f rtsp {output}

When I call the streams API with PUT, would it update an existing stream if one already exists? Or do I have to call DELETE first and PUT afterwards?

As a follow-up question, can ffmpeg act as the TCP server, so that I can connect to it and send the data (without setting up the server myself)? As of now, I set up a TCP server, ffmpeg connects to it, and the client handler sends the data to ffmpeg.

AlexxIT commented 1 year ago

PUT will overwrite the stream with a new object. That's not a good idea if you have active connections on the current stream. They continue to work, but you can't see them in the streams list.

I think I can fix this in the future, but it's better to open a separate issue about this problem.

  1. Please describe the full path of the video data.

fuatakgun commented 1 year ago

The current overwriting behavior is fine for me, as I will probably write it once on every restart of HA.

About ffmpeg and the TCP server: I am receiving the video data as bytes over a websocket connection, so there is no path.

I had used ffmpeg with a pipe at first, but it was very unreliable.

Then I started ffmpeg with TCP, but handling the ffmpeg process inside Home Assistant is very tricky.

If I could outsource handling ffmpeg + the TCP server and just write the bytes to the server, life would be easier.

Just a side note: these devices are not streaming all the time; they run on demand to preserve battery life.

AlexxIT commented 1 year ago

Do you have two tracks, for video and audio? Do you know what is inside the video data? Is it an H264 Annex-B stream? Can you send some dumps?

I now plan to build in support for receiving MJPEG data via TCP. Maybe I should add support for your data. This support would be built into go2rtc, so there would be no need to use FFmpeg.

fuatakgun commented 1 year ago

I was thinking of something like this (note the doubled braces, so that the literal {output} placeholder survives the f-string):

command = f"exec:ffmpeg -re -y -f {video_codec} -i tcp://{ip}:{port} -vcodec copy -rtsp_transport tcp -f rtsp {{output}}"
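The proposed command can be wrapped in a small helper, which makes it easy to verify the shape of the source string. This is a hypothetical helper, not part of go2rtc or the integration; note that `{output}` is kept literal because go2rtc substitutes it with the internal RTSP publish URL.

```python
def build_exec_source(video_codec: str, ip: str, port: int) -> str:
    """Assemble a go2rtc exec source that reads a raw stream from TCP
    and republishes it as RTSP. {output} stays literal for go2rtc."""
    return (
        f"exec:ffmpeg -re -y -f {video_codec} -i tcp://{ip}:{port} "
        "-vcodec copy -rtsp_transport tcp -f rtsp {output}"
    )
```

For example, `build_exec_source("h264", "192.168.1.10", 1234)` yields a source usable in the PUT request above.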

AlexxIT commented 1 year ago

Interesting idea. Who initiates the connection?

  1. The user requests an RTSP or WebRTC stream from go2rtc
  2. go2rtc pings your software, for example with an echo source, and runs exec/ffmpeg with your example (but from the echo source)
  3. Your software connects to the websocket and starts pushing frames to ffmpeg

Or maybe:

  1. Some action???
  2. Your software connects to the websocket and adds an exec/ffmpeg stream to go2rtc...

But go2rtc will not run ffmpeg if no one is going to watch it...

fuatakgun commented 1 year ago

You are right, my proposal seems problematic: if the user does not start watching the stream in a timely manner, the frames (bytes) will not be written to ffmpeg, right?

Any ideas or suggestions other than initiating the stream automatically?

AlexxIT commented 1 year ago

But why would anyone need a stream that no one is watching? :)

But anyway, go2rtc supports "external" streams. You need to create a stream with an empty source and later push data to it with RTSP. You need to start ffmpeg manually, not from go2rtc.

streams:
  camera1:
# ffmpeg -re -f {video_codec} -i tcp://{ip}:{port} -vcodec copy -rtsp_transport tcp -f rtsp rtsp://localhost:8554/camera1

PS. -f is the format, not the codec. What values do you plan to use there?

fuatakgun commented 1 year ago

I really do not want to manage the ffmpeg process from Python; that brings me back to the initial stage (using rtsp-simple-server).

If I start the stream after registering the camera and exec config via the streams API, would it work?

For the codec, I will use h264 or h265.

AlexxIT commented 1 year ago

go2rtc won't run exec/ffmpeg just because you add the stream via the API. Someone needs to request this stream from go2rtc via RTSP or MSE or WebRTC... And go2rtc will stop ffmpeg when the client stops viewing the stream.

Maybe I can add a new WebSocket API to go2rtc for receiving frames via WebSocket. That would not require ffmpeg at all. But I need some kind of packet dump of what your software gets from the cloud WebSocket.

Or you can give me remote access to one of these cameras. As I understand it, we're talking about https://github.com/fuatakgun/eufy_security ?

fuatakgun commented 1 year ago

Yes, you are right, that is the repository. There is another developer who extracts the bytes in another Docker container (eufy-security-ws) and sends them over a websocket. I forward the incoming bytes to an RTSP server and give the RTSP URL back to Home Assistant, so users can play a live stream from their cameras.

The initial test setup is working perfectly fine.

Testing with go2rtc:

It is generating a broken_pipe error. I was expecting this to work, but it is failing. Here is my console output:

- console 1:

ffmpeg version 2022-07-18-git-cb22d5ea3c-essentials_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
  libavutil      57. 29.100 / 57. 29.100
  libavcodec     59. 39.100 / 59. 39.100
  libavformat    59. 29.100 / 59. 29.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 46.100 /  8. 46.100
  libswscale      6.  8.100 /  6.  8.100
  libswresample   4.  8.100 /  4.  8.100
  libpostproc    56.  7.100 / 56.  7.100
Splitting the commandline.
Reading option '-re' ... matched as option 're' (read input at native frame rate; equivalent to -readrate 1) with argument '1'.
Reading option '-i' ... matched as input url with argument 'bunny.mp4'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'copy'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mpegts'.
Reading option 'tcp://192.168.178.119:1234?listen' ... matched as output url.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-g' ... matched as AVOption 'g' with argument '15'.
Trailing option(s) found in the command: may be ignored.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Successfully parsed a group of options.
Parsing a group of options: input url bunny.mp4.
Applying option re (read input at native frame rate; equivalent to -readrate 1) with argument 1.
Successfully parsed a group of options.
Opening an input file: bunny.mp4.
[NULL @ 000001d5b327ebc0] Opening 'bunny.mp4' for reading
[file @ 000001d5b327b800] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] Processing st: 0, edit list 0 - media time: 1024, duration: 153600
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] Offset DTS by 1024 to make first pts zero.
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] Setting codecpar->delay to 2 for stream st: 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] Before avformat_find_stream_info() pos: 4502 bytes read:32768 seeks:0 nb_streams:1
[h264 @ 000001d5b328f040] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 000001d5b328f040] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 000001d5b328f040] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 000001d5b328f040] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 000001d5b328f040] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 000001d5b328f040] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 000001d5b328f040] Format yuv420p chosen by get_format().
[h264 @ 000001d5b328f040] Reinit context to 1280x720, pix_fmt: yuv420p
[h264 @ 000001d5b328f040] no picture
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d5b327ebc0] After avformat_find_stream_info() pos: 591656 bytes read:591656 seeks:0 frames:1
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bunny.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Big Buck Bunny, Sunflower version
    artist          : Blender Foundation 2008, Janus Bager Kristensen 2013
    composer        : Sacha Goedegebure
    encoder         : Lavf57.63.100
    comment         : Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net
    genre           : Animation
  Duration: 00:00:10.00, start: 0.000000, bitrate: 25657 kb/s
  Stream #0:0[0x1](und), 1, 1/15360: Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(progressive, left), 1280x720 [SAR 1:1 DAR 16:9], 0/1, 25653 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
Successfully opened the file.
Applying option f (force format) with argument mpegts.
Successfully parsed a group of options.
Opening an output file: tcp://192.168.178.119:1234?listen.
[tcp @ 000001d5b3701f40] No default whitelist set
Successfully opened the file.
[mpegts @ 000001d5b3285180] service 1 using PCR in pid=256, pcr_period=100ms
[mpegts @ 000001d5b3285180] muxrate VBR, sdt every 500 ms, pat/pmt every 100 ms
Output #0, mpegts, to 'tcp://192.168.178.119:1234?listen':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Big Buck Bunny, Sunflower version
    artist          : Blender Foundation 2008, Janus Bager Kristensen 2013
    composer        : Sacha Goedegebure
    genre           : Animation
    comment         : Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net
    encoder         : Lavf59.29.100
  Stream #0:0(und), 0, 1/90000: Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(progressive, left), 1280x720 (0x0) [SAR 1:1 DAR 16:9], 0/1, q=2-31, 25653 kb/s, 30 fps, 30 tbr, 90k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
Automatically inserted bitstream filter 'h264_mp4toannexb'; args=''
av_interleaved_write_frame(): Unknown errortime=00:00:02.06 bitrate=25839.8kbits/s speed=1.01x
No more output streams to write to, finishing.
Error writing trailer of tcp://192.168.178.119:1234?listen: Error number -10053 occurred
frame=   80 fps= 31 q=-1.0 Lsize=    8468kB time=00:00:02.56 bitrate=27027.5kbits/s speed=1.01x
video:8276kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.322037%
Input file #0 (bunny.mp4):
  Input stream #0:0 (video): 80 packets read (8474585 bytes);
  Total: 80 packets (8474585 bytes) demuxed
Output file #0 (tcp://192.168.178.119:1234?listen):
  Output stream #0:0 (video): 80 packets muxed (8474585 bytes);
  Total: 80 packets (8474585 bytes) muxed
[AVIOContext @ 000001d5b3ab48c0] Statistics: 8671368 bytes written, 0 seeks, 306 writeouts
Error closing file tcp://192.168.178.119:1234?listen: Error number -10053 occurred
0 frames successfully decoded, 0 decoding errors
[AVIOContext @ 000001d5b327c300] Statistics: 8479087 bytes read, 0 seeks
Conversion failed!
AlexxIT commented 1 year ago

Your example works for me, but only once. When the user stops watching the stream, go2rtc will close ffmpeg, and the second ffmpeg will also fail.

# ffmpeg -re -i bbb.mp4 -c:v copy -f mpegts tcp://localhost:1234?listen
streams:
  cam1: exec:ffmpeg -re -i tcp://localhost:1234 -c:v copy -rtsp_transport tcp -f rtsp {output}