nodegin opened 6 years ago
Hi @nodegin,
Would you be able to try compiling your own contrib-hls version? We've got a preliminary PR for support with #1416, which also depends on videojs/videojs-contrib-media-sources#178. I briefly described how to do it for someone else here, and the patched build worked for them. I'd be interested in knowing if it works for you, too.
Hi @squarebracket,
I tried to compile by following your instructions in #1416; however, I'm still facing the same issue, so I'm not sure if my compilation went wrong. Could you send me a patched, precompiled js file?
Do you have a test stream I could try?
@squarebracket Hi, you can use this stream to test:
Seems to be working for me here. If you don't know how to apply PRs locally, see here. If you message me on the videojs slack I can send you a compiled js file, but github doesn't accept attaching js files.
Can you email the script to me? I don't use Slack. My address is o@noooo.ooo
I finally built a working version that solves this issue, merged with the latest commits.
videojs-contrib-hls from PR #1242 merged with master 122c7897 videojs-contrib-media-sources from PR #178 merged with master 9849189a
For those looking for the .js file: videojs-contrib-hls.min.js.zip https://github.com/videojs/videojs-contrib-hls/files/2238295/videojs-contrib-hls.min.js.zip
Hi @nodegin, I'm trying to figure out how to build it. If it's convenient for you, could you share your build steps? Thanks!
I just built it by following the instructions.
video.js:128 VIDEOJS: ERROR: (CODE:-3 undefined) Failed to execute 'appendBuffer' on 'SourceBuffer': The SourceBuffer is full, and cannot free space to append additional buffers. MediaError
logByType @ video.js:128
2 hours long high bitrate 1080p stream fails after some time. Does this in Chrome and Firefox. Please fix!
This issue also applies to DASH playback and is incredibly annoying when dealing with high bitrate streams as it will happen within a few seconds.
Why can't 20MB segments be handled?
Having same issue on Chrome.
Related: videojs/video.js#6458
We plan on fixing it next quarter, hopefully. The reason it's a problem is that the forward buffer in MSE is quite small, and we don't handle the QuotaExceededError. So if we have a full back buffer and we try to add a new large segment, we run out of space in the SourceBuffer. It's definitely something that can be addressed, but it may be more involved than it appears.
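For reference, detecting the unhandled error described above comes down to catching the exception `appendBuffer` throws and checking its name. A minimal sketch (the `safeAppend`/`onQuotaExceeded` names are illustrative, not the VHS API; the SourceBuffer is any object with an `appendBuffer` method):

```javascript
// Wrap SourceBuffer.appendBuffer so a QuotaExceededError is detected
// instead of surfacing as a fatal media error.
function safeAppend(sourceBuffer, bytes, onQuotaExceeded) {
  try {
    sourceBuffer.appendBuffer(bytes);
    return true;
  } catch (e) {
    // Browsers throw a DOMException named 'QuotaExceededError' when the
    // buffer is full and cannot free space for the new append.
    if (e.name === 'QuotaExceededError') {
      onQuotaExceeded(bytes);
      return false;
    }
    throw e; // anything else is a genuine error
  }
}
```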
I've also run into this issue. Has anybody who previously ran into it gotten a workaround running?
For now we are segmenting our media into shorter chunks to (at least) work around the > 20 MB issue, but we've still got occasional issues.
I'm thinking about tackling this one. I'm having this issue whilst running videojs on a Chromecast, with a tiny 30 MB SourceBuffer.
@gkatsev - could we perhaps simply reference the head of the queue in this instance instead of splicing it off? Then, if the append action throws a QuotaExceededError exception, we can simply retry it on the next queue run. If it succeeds, we can proceed with popping off the head.
Would something like this work? An alternative idea I'm thinking about is to create some form of buffer-byte-size-tracking implementation, which allows us to limit the data we've placed into the source buffer at any given point, only appending if it's smaller than the configured value.
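The "peek at the head, pop only on success" idea above could be sketched roughly like this (illustrative names, not the actual source-updater queue code):

```javascript
// Process queued append actions, but keep a failed action at the head
// so a later queue run (e.g. after back buffer trimming) can retry it.
function processQueue(queue, sourceBuffer) {
  while (queue.length) {
    const action = queue[0]; // peek, don't shift
    try {
      sourceBuffer.appendBuffer(action.bytes);
    } catch (e) {
      if (e.name === 'QuotaExceededError') {
        // Leave the action at the head; retry on the next run.
        return false;
      }
      throw e;
    }
    queue.shift(); // success: now it's safe to pop the head
  }
  return true;
}
```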
Hey @rhyswilliamsza , thanks for offering to take this on! We really appreciate it.
Tracking the bytes in the buffer can work, but also can lead to some misinformation. For instance, if we switch renditions and end up with overlapping content, then the segment's total bytes may need to be divided by added duration of content, and that estimate might not be accurate. We'd also have to differentiate between audio and video buffers. I think this approach may be good as a potential optimization to avoid re-attempts, but for now we may be better off just re-attempting.
If you have any thoughts on different approaches, let me know, but I was thinking a bit about it, and one approach we can take is to just block until the append succeeds as part of our queue clearing, and have a back buffer trimmer acting in-between.
A buffer trimmer can be provided to the source updaters, a part of the source updater, or can be part of the segment-loaders or master-playlist-controller. We'd want the source-updater to continue its normal operations, but catch an exception for quota exceeded, and, as you said, either not remove that action from the head of the queue and fire events indicating buffer full (for the segment-loaders or master-playlist-controller to act on), or start a separate procedure to try to resolve the situation.
The procedure might look something like:
The repeats can either be handled internally (to source-updater), or the source-updater can be blocked and a method called to try to resume after an outside module handles some clearing.
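The "block until the append succeeds, with a trimmer acting in-between" flow could look something like this sketch. The SourceBuffer is abstracted behind two async callbacks (`append`, `trimBackBuffer`) so the control flow is clear; both names and the retry limit are illustrative, not VHS API:

```javascript
// On a quota error, free space behind the playhead, then retry the
// append; give up (and let the caller decide) after maxAttempts.
async function appendWithTrim(append, trimBackBuffer, maxAttempts = 3) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await append();
      return true;
    } catch (e) {
      if (e.name !== 'QuotaExceededError') throw e;
      // Free space behind the playhead, then loop to retry the append.
      await trimBackBuffer();
    }
  }
  return false; // still full after maxAttempts
}
```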
Those were my initial thoughts, but I'm interested to hear what you think, or if you have any ideas on other approaches.
And thank you again!
Awesome. I agree that the 'attempt and re-attempt' approach is probably easiest for now. Do you think it's necessary to implement a new procedure for this, though? With the back buffer processes already clearing old source buffer time ranges, would it not be simpler to re-process the head of the queue repeatedly until it succeeds? I haven't actually looked that closely yet (this will obviously break if the clearing action is also a queue item).
Another thing you'd definitely have the expertise on: will the master-playlist-controller continue fetching new content to add to the queue regardless of whether the queue size is reducing, or will it wait until we append some data? Perhaps this is what you meant by firing events to the playlist controller. My concern is that if we stop appending data, will our master-playlist-controller continue downloading and bringing chunks into memory?
EDIT: We could also leverage the source updater's updating property? This is already used to choke the filling of the buffer. Perhaps we could simply set up the logic to keep it 'true' whilst the chunk fails to append, and thereafter set it back to 'false' once the chunk appends successfully. Just throwing around ideas.
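That updating-flag idea might look something like the sketch below: hold the flag true across failed appends so upstream loaders stay choked, and release it only once the append lands. The class and method names here are illustrative, not the actual source-updater API:

```javascript
// Gate appends behind an `updating` flag that stays true while the
// current chunk keeps failing with QuotaExceededError.
class AppendGate {
  constructor(sourceBuffer) {
    this.sourceBuffer = sourceBuffer;
    this.updating = false;
  }
  tryAppend(bytes) {
    this.updating = true; // choke upstream until this append lands
    try {
      this.sourceBuffer.appendBuffer(bytes);
      this.updating = false; // success: allow the next chunk in
      return true;
    } catch (e) {
      if (e.name === 'QuotaExceededError') {
        return false; // stay "updating"; caller retries later
      }
      this.updating = false;
      throw e;
    }
  }
}
```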
I think both of the questions (trimming back buffer and loading extra content while blocked from appending buffer) may have the same answer.
segment-loader will not load extra segments until its current segment has completed processing: https://github.com/videojs/http-streaming/blob/6c337e18fc009ae2e201af4b3816898bafd2c3b1/src/segment-loader.js#L2421
So if the source-updater is either re-processing, or just not calling the updateend callback, the segment-loaders will be "paused." This can be a good thing, as it should avoid the problem you mentioned around downloading extra content and filling local memory while blocked on buffer appends.
But segment-loader is also responsible for trimming the back buffer: https://github.com/videojs/http-streaming/blob/6c337e18fc009ae2e201af4b3816898bafd2c3b1/src/segment-loader.js#L2124
So if source-updater is re-processing the queue and not calling the callback for the append to complete, then segment-loader won't load new segments, but also won't trim any back buffer.
There are a few ways we can go about it:
I think it might make sense to give source-updater some more of the control here (i.e., something around solution 2). Allow it to do its own monitoring, trimming, and re-appending, as it can centralize the logic and source-updater should be responsible for managing the source buffers.
Let me know what you think though!
Hey everyone, we have an initial fix for this. It's in VHS 2.6.4 and the Video.js 7.11.7 pre-release. The current change only detects the error, clears the back buffer, and tries appending again. Eventually, we'd want a fix that also splits large segments into smaller parts and appends the pieces, and potentially has multiple quota exceeded errors trigger a downswitch. Hopefully the change we have now improves everyone's playback; we'll get back to the other pieces as soon as we can.
Hi! Just sharing thoughts here, but perhaps another approach (with other drawbacks of course) would be to first attempt to clear the back buffer, and try appending the segment again. If the buffer overflows when appending a segment (and the buffer was initially empty), that segment could be skipped.
That would of course not solve everything, but the scenario where there are occasional segments which exceeds the buffer size by themselves would work... better.
We do currently clear the back buffer on the QuotaExceededError, but it seems that Firefox's buffer is smaller than Chrome's, which means some high bitrate video may not work, as a whole segment will be too large for Firefox's buffer. We do want to be able to split a segment into chunks and append each chunk, but unfortunately we haven't had the chance to do that yet, and it doesn't seem like we'll get to it in the near term.
Also, unfortunately, we're not currently set up to be able to skip entire segments.
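The chunked-append idea mentioned above could be sketched as simple byte-range splitting; MSE's byte stream parser handles partial media segments across appends, so each chunk can be appended separately. This is an illustration, not VHS code, and the 10 MiB chunk size is an arbitrary example:

```javascript
// Split one large segment into smaller views so no single appendBuffer
// call has to fit the whole segment into the remaining quota.
function splitSegment(bytes, chunkSize = 10 * 1024 * 1024) {
  const chunks = [];
  for (let offset = 0; offset < bytes.byteLength; offset += chunkSize) {
    // subarray creates views into the same memory, not copies
    chunks.push(bytes.subarray(offset, offset + chunkSize));
  }
  return chunks;
}
```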
Hi! I'm facing this exact issue on Firefox. To work around it, would it be OK to pre-process these videos with ffmpeg before sending them to the browser? Does anybody know a safe limit to avoid this issue?
Thanks for the project.
Any updates on this?
Description
Unable to play HLS stream while segment size > 20MB
Sources
Any m3u8 playlist with large segments (usually 1080p streams)
Steps to reproduce
Explain in detail the exact steps necessary to reproduce the issue.
Results
Expected
Video can be played
Error output
VIDEOJS: ERROR: (CODE:-3 undefined) The quota has been exceeded. Object { code: -3, type: "APPEND_BUFFER_ERR", message: "The quota has been exceeded.", originalError: DOMException } video.js:129:5
videojs-contrib-hls version
latest
videojs version
latest
Browsers
Firefox