Closed huangsherry closed 7 years ago
You need to modify the MP4Client config, e.g.:
MP4Client -opt Network:BufferLength=100 -opt Network:LowLatencyBufferMax=0 -opt DASH:LowLatency=chunk -opt DASH:BufferingMode=none -opt DASH:UseServerUTC=yes $URL
If you are using MP4Box as a live source, you may want to specify -frag-rt to simulate real-time sending of DASH fragments.
Thanks, I am using DashCast with the following command:
DashCast -vf dshow -vres 640x480 -vfr 30 -v video="Integrated Webcam" -live -low-delay -frag 200 -insert-utc -seg-marker eods -min-buffer 0.2 -ast-offset -800 -pixf yuv420p
node gpac-dash.js -segment-marker eods -chunk-media-segments
Is there anything I need to adjust here?
BTW, how did you measure the latency?
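For reference, the full single-machine pipeline discussed in this thread can be sketched as three processes. Every flag below is taken verbatim from the commands quoted above (DashCast encoder, gpac-dash.js server, MP4Client player); the device name and MPD URL are specific to this setup, so adjust them to yours. Treat this as a starting point, not definitive values:

```shell
# 1) Encoder: capture the webcam and publish a low-latency live DASH session
#    (200 ms fragments, UTC timing inserted, 'eods' marker closing each segment)
DashCast -vf dshow -vres 640x480 -vfr 30 -v video="Integrated Webcam" \
         -live -low-delay -frag 200 -insert-utc -seg-marker eods \
         -min-buffer 0.2 -ast-offset -800 -pixf yuv420p

# 2) Server: push media chunks as soon as they are produced,
#    using the same 'eods' marker and HTTP chunked transfer
node gpac-dash.js -segment-marker eods -chunk-media-segments

# 3) Player: low-latency playback of the live MPD
MP4Client -opt Network:BufferLength=100 -opt Network:LowLatencyBufferMax=0 \
          -opt DASH:LowLatency=chunk -opt DASH:BufferingMode=none \
          -opt DASH:UseServerUTC=yes http://127.0.0.1:8000/output/dashcast.mpd
```

The encoder, server, and player must agree on the segment marker (eods here) for chunked delivery to work end to end.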
No, I think you're fine with your server and encoder settings. The latency is measured using the DASH and media log tools (-logs dash:media@debug).
Thanks, the latency is now around 1 s. Is there any way I can shorten it to 240 ms?
I printed the log output, but I could not tell which line reflects the actual latency. I found this line: "[ODM1 (http://127.0.0.1:8000/output/dashcast.mpd)] Frame TS 192033 NTP diff with sender 670 ms". Is this the measured latency?
The log line you found is the total latency from frame capture time to frame display time; however, it does not account for any potential NTP mismatch between the two machines (obviously not relevant if you are on a single machine).
You need to make sure you are on the live edge of the server, i.e. that you see chunks being pushed, not just whole segments. For that, run MP4Client with -logs network@info; you should see the chunk details.
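Checking the live edge as described above amounts to adding the network logging flag to the playback command. The sketch below combines the MP4Client options quoted earlier in this thread with -logs network@info; the MPD URL is the one used in this setup and the option syntax assumes the GPAC 0.7.x version seen in the logs:

```shell
# Play the live MPD with chunk-level network logging enabled.
# If chunked delivery is working you should see per-chunk bandwidth
# estimation lines ("chunk runtime ...") while a segment is still
# being downloaded, not just one line per completed segment.
MP4Client -opt Network:BufferLength=100 -opt Network:LowLatencyBufferMax=0 \
          -opt DASH:LowLatency=chunk -opt DASH:BufferingMode=none \
          -opt DASH:UseServerUTC=yes \
          -logs network@info \
          http://127.0.0.1:8000/output/dashcast.mpd
```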
Many thanks. I am running everything on the same computer, so NTP mismatch should not be a problem. Do you know how I should configure DashCast to further reduce the latency to 240 ms? I attached part of the network@info log. I think I am on the live edge of the server, right? I could not find out what is limiting the latency.
```
[HTTP] HTTP/1.1 200 OK
Content-Type: application/octet-stream
Server-UTC: 1497928280298
Date: Tue, 20 Jun 2017 03:11:20 GMT
Connection: keep-alive
Transfer-Encoding: chunked
[CACHE] Opening cache file gmem://0@0x0 for write (http://127.0.0.1:8000/output/v1_39_gpac.m4s)...
[HTTP] bandwidth estimation: runtime 1 (chunk runtime 0) ms - bytes 31798 - rate 254384 kbps
[iso file] Warning: TFDT timing 38033334 less than cumulated timing 38033354 - using tfdt
[HTTP] bandwidth estimation: runtime 1 (chunk runtime 0) ms - bytes 51629 - rate 413032 kbps
[HTTP] bandwidth estimation: runtime 1 (chunk runtime 0) ms - bytes 76320 - rate 610560 kbps
[HTTP] bandwidth estimation: runtime 1 (chunk runtime 0) ms - bytes 101228 - rate 809824 kbps
[HTTP] bandwidth estimation: runtime 1 (chunk runtime 1) ms - bytes 126435 - rate 1011480 kbps
[HTTP] bandwidth estimation: runtime 1 (chunk runtime 1) ms - bytes 126443 - rate 1011544 kbps
[iso file] Unknown box type eods
[HTTP] url http://127.0.0.1:8000/output/v1_39_gpac.m4s downloaded in 549 us (987835 kbps) (528628 us since request - got response in 907 us)
[CACHE] Requesting deletion for http://127.0.0.1:8000/output/v1_38_gpac.m4s
[Downloader] gf_dm_configure_cache(0x7fc0db00ae00), cached=yes
[CACHE] Cache setup to 0x7fc0db00ae00 /tmp/gpac_cache_F736D55207031887991C674E4E43A84729E3723D.mpd
[HTTP] Sending request at UTC 1497928280828
GET /output/dashcast.mpd HTTP/1.1
Host: 127.0.0.1
User-Agent: GPAC/0.7.2-DEV-rev104-g9363564-master
Accept: /
Connection: Keep-Alive
Accept-Language: en
Icy-Metadata: 1
```
I was also reading the paper "Overhead and performance of low latency live streaming using MPEG-DASH". That paper reports an achievable latency of around 240 ms.
I have tested this repo on my laptop with the parameters given in your wiki. The best latency I could achieve with them is around 2 s.
Do you know how I should set the parameters to achieve a 240 ms delay?
Thanks.