Closed: actionakin closed this issue 2 months ago
This is probably an issue related to Tizen's codec support on 2023 models, but I can't tell. Any insight into what might be going on would be much appreciated. Today the playback made it through the entire 1h45min stream, but I still saw 2000-3000ms 'appending' times.
Some additional notes:
https://cph-p2p-msl.akamaized.net/hls/live/2000341/test/master.m3u8
https://test-streams.mux.dev/x36xhzz/x36xhzz.m3u8
https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8
Hi @actionakin, have you tried isolating the issue to a single variant?
For example, if you only load the m3u8 for avc1.640028 @4Mbps, does the issue still reproduce? Is it specific to avc1.640029 @6.4Mbps?
Yes, I pinned the level to the lowest bitrate. It lasted a little longer, but the issue still occurred.
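As a side note, pinning hls.js to the lowest variant can be done either via config or at runtime. startLevel, autoLevelCapping, and currentLevel are real hls.js options/properties, but the snippet below is an illustrative sketch rather than the exact code used in this thread:

```javascript
// Sketch: two ways to pin hls.js to its lowest-bitrate level (index 0).
const hls = new Hls({
  startLevel: 0,        // begin playback on level 0 instead of auto-choosing
  autoLevelCapping: 0,  // cap ABR so it never switches above level 0
});
// Alternatively, assigning currentLevel at runtime disables automatic
// level selection and hard-pins playback to that level:
hls.currentLevel = 0;
```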
Please share the config in the description you are testing with. Note that the demo page can be memory intensive. While it is useful for debugging issues, long-form playback should be tested on a page optimized for viewing.
That being said, I am curious to know: does the issue still reproduce with enableWorker set to false in the config? And with backBufferLength set to Infinity or -1 (to disable buffer-controller back buffer removal)?

The issue does not persist in the browser. I've even tried playback in Chrome 94 via SauceLabs and had no issues.
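The two suggested changes map onto the hls.js config like this (a minimal debugging sketch, not the poster's actual code):

```javascript
// Sketch: the two debugging changes suggested above.
const debugConfig = {
  enableWorker: false,        // run the transmuxer on the main thread
  backBufferLength: Infinity, // or -1: disable buffer-controller back buffer removal
};
const hls = new Hls(debugConfig);
```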
Here's our prod config:
{
"nudgeOffset": 0.1,
"maxFragLookUpTolerance": 0.25,
"highBufferWatchdogPeriod": 5,
"nudgeMaxRetry": 10,
"maxBufferLength": 60,
"maxMaxBufferLength": 60,
"capLevelToPlayerSize": false,
"enableWebVTT": true,
"enableCEA708Captions": true,
"liveDurationInfinity": false,
"fragLoadingMaxRetryTimeout": 64000,
"forceKeyFrameOnDiscontinuity": false,
"progressive": false,
"liveSyncDurationCount": 3,
"initialLiveManifestSize": 1,
"abrBandWidthFactor": 0.98,
"abrBandWidthUpFactor": 0.6,
"enableSoftwareAES": false,
"fragLoadingMaxRetry": 8,
"levelLoadingMaxRetry": 6,
"backBufferLength": 60,
"maxBufferSize": 60000000,
"debug": false,
"startLevel": 3,
"startPosition": 0
}
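As an aside, the two ABR factors in this config scale the measured bandwidth estimate before hls.js compares it against a candidate level's bitrate, with a stricter factor for up-switches. The following is a simplified illustration of that idea, not hls.js's actual abr-controller code:

```javascript
// Simplified illustration (NOT hls.js internals): a candidate level is
// acceptable when its bitrate fits under the scaled bandwidth estimate,
// using a more conservative factor when switching upward.
function canSwitchTo(levelBitrate, currentBitrate, bandwidthEstimate, config) {
  const factor = levelBitrate > currentBitrate
    ? config.abrBandWidthUpFactor // e.g. 0.6: be conservative going up
    : config.abrBandWidthFactor;  // e.g. 0.98: near-full estimate otherwise
  return levelBitrate <= bandwidthEstimate * factor;
}

const cfg = { abrBandWidthFactor: 0.98, abrBandWidthUpFactor: 0.6 };
// With a 6 Mbps estimate, a 4 Mbps up-switch needs 4e6 <= 6e6 * 0.6 = 3.6e6:
console.log(canSwitchTo(4e6, 2e6, 6e6, cfg)); // false
// A 4 Mbps down-switch only needs 4e6 <= 6e6 * 0.98 = 5.88e6:
console.log(canSwitchTo(4e6, 6e6, 6e6, cfg)); // true
```

With abrBandWidthUpFactor at 0.6, this config demands substantial bandwidth headroom before climbing a level, which helps avoid oscillation on constrained devices like TVs.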
I'll report back shortly after disabling the worker / backBufferLength.
I set enableWorker: false and backBufferLength: -1 and 🤞 so far it's holding at around 200-400ms. I'm going to continue testing different videos, but this may be our solution.
No luck. I ran it with these configurations in my local build, and it started locking up around the 19-minute mark of playback. I also tried setting just one or the other and got similar results. Using the demo app running on the TV, I did seem to get much lower "appending" times, but I didn't test for longer than 10 minutes before running tests with our app.
@actionakin I experienced the same issue on our Tizen 2023s. For us, disabling the worker corrected the problem. However, our back buffer length and max buffer size settings are lower than yours, so maybe the combination of that and enableWorker: false did the trick for us.
You can also try Samsung's own profiling tool to monitor performance: if you enter MUTE 1 1 4 MUTE or MUTE 1 8 3 MUTE with your remote control, a small yellow overlay panel with CPU/memory stats should be displayed.
If I am not mistaken, Samsung 2023 models are on Tizen 7.0, and there are some relevant improvements in Tizen 8.0's release notes. I wonder if that has anything to do with this not being reproducible on 2024 models, though I might be wrong about that.
@actionakin I have a Samsung 2023. Since I've experienced this problem myself in the past and am familiar with it, I will see if I can reproduce your issue with the prod config you shared above.
@agajassi You rock. I tried again yesterday and realized that we had some code dynamically overriding our backBufferLength and maxBufferSize. I disabled that and hard-coded those values, along with enableWorker: false. I was still experiencing an issue, this time related to an empty forward buffer. I'll be looking into it more today, as well as disabling our custom ABRController.
Update: I've now got our app running with the recommended config changes. I realized that I was still testing in development mode with the Tizen debugger attached. Now everything is running smoothly and video playback is holding strong. I'm still in the middle of running long tests, but it's looking very promising.
Ohh yes, forgot to mention that you can't have the debugger open for this test, 'cos it adds its own overhead. If I need the logs for whatever reason, I usually just display them on screen and close the debugger. Glad to hear that it's holding up so far. Let us know if this resolves your issues.
Also keep in mind that @robwalch optimized Web Worker use, and that PR should be released with v1.6.0. You might want to consider upgrading to that release later as well.
@agajassi Noted about v1.6.0. So far I've played video for about 45 min and I'm seeing steady memory usage. This is the best result I've had in 2 weeks ;)
Touchdown! We updated the config with
{
backBufferLength: -1,
enableWorker: false,
maxBufferSize: 30000000,
}
Everything is playing smoothly now. Thanks for all the help @agajassi and @robwalch
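For anyone landing here later, wiring that final config into a player looks roughly like this (a sketch: attachMedia and loadSource are real hls.js APIs, but the element id and stream URL are placeholders, not from this thread):

```javascript
// Sketch: applying the final config above with hls.js.
const hls = new Hls({
  backBufferLength: -1,    // disable buffer-controller back buffer removal
  enableWorker: false,     // transmux on the main thread (Tizen workaround)
  maxBufferSize: 30000000, // ~30 MB forward-buffer cap
});
const video = document.getElementById('player'); // placeholder element id
hls.attachMedia(video);
hls.loadSource('https://example.com/master.m3u8'); // placeholder URL
```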
What version of Hls.js are you using?
1.5.13
What browser (including version) are you using?
Chromium m94
What OS (including version) are you using?
Samsung Tizen 7.0
Test stream
https://mlb-cuts-diamond.mlb.com/FORGE/2024/2024-02/17/981e4101-03fa31e9-8a73f8e8-csvm-diamondx64-asset.m3u8
Configuration
Additional player setup steps
I'm using the latest hls.js demo app with the src param set in the url.
Checklist
Steps to reproduce
Expected behaviour
The video should play without interruption
What actually happened?
The "appending" time under the "load event" section climbs to 5000ms+; eventually the whole app grinds to a halt and crashes.
Console output
Chrome media internals output
No response