Closed: squarebracket closed this pull request 5 years ago
Note that tests will fail until #1412 is merged in and I can rebase against it.
Rebased against master to make (most) tests pass. Only failing test now is related to VTT cue removal, which will keep failing until videojs/videojs-contrib-media-sources#178 is merged in; I coded the tests for the new media-sources version, and all tests pass locally when using the patched media-sources.
Amazing! Thank you so much! Just double-checking the implementation steps to test this. I understand how to fetch the pull request for #1416. Then do I fetch the pull request for https://github.com/videojs/videojs-contrib-media-sources/pull/178 as well? Then rebuild both, upload the dist directories, reference the proper js files, and call it a day?
@genelamarellis pretty much, though you have to make sure contrib-hls is building with the patched version of contrib-media-sources, which it will not do by default. The easiest way I know of to do this:
git clone https://github.com/videojs/videojs-contrib-media-sources.git
git clone https://github.com/videojs/videojs-contrib-hls.git
cd videojs-contrib-media-sources
# <apply PR changes>
npm link
cd ../videojs-contrib-hls
npm link videojs-contrib-media-sources
# <apply PR changes>
npm install
npm run build
Once you've got that done, as you say, upload the dist directory and reference the appropriate file.
Ok thank you so much! I have run all of the commands and I'm waiting for my team to give me an update to see if it worked. Thanks!
Excellent! I'm excited to hear the results.
Hero! This has solved our issues! Thank you so much! I hope this gets merged into the main branch. Life saver! Thanks again!
Great, glad to hear it! It's a fair bit of code, so it might be a bit until it's merged in. Hopefully it won't be too long.
These are just initial questions to help me understand the chosen strategy a little better:
Make goalBufferLength, maxGoalBufferLength, and backBufferLength getters+setters on the master playlist controller, and pass them through to the segment loaders
Rather than have this be getters/setters on the master playlist controller, these could just be settings/options passed into the master playlist controller, then passed down to and modified on the segment loaders.
Set the aforementioned values when we get a QuotaExceededError based on the amount of time and data in the buffers when it occurs
Why does the time matter if we are also using the amount of data in the buffer?
checkBuffer_ checks to see if we might bust the buffer, and trims the back buffer and returns null if that's the case
Seems like trimming the back buffer should be done in another method outside checkBuffer_
Slice up the segment into 5MB chunks if we get a QuotaExceededError on the first append
If I understood correctly, we are appending and removing whole segments in this PR right? Why is it needed to split up the segment into chunks?
Add functions for calculating the min and max bytes in the buffer based on segment metadata cues (playhead is rarely at a segment boundary so we need both min and max functions for maximum safety)
Is there a specific use for the min bytes in the buffer? Would the ideal be to play it safe and always assume the max is in the buffer?
Add function for removing non-buffered segment metadata cues
I thought this exists in contrib-media-sources?
In the SourceUpdater, when we get a QuotaExceededError, move the pendingCallback to another variable (deferredCallback) and null out pendingCallback so that we can process a remove() and try to reappend the segment that caused the error
Reading this naively, it seems like the remove and reappend attempt can be done before the original pendingCallback is run. Is there something preventing this?
Rather than have this be getters/setters on the master playlist controller, these could just be settings/options passed into the master playlist controller, then passed down to and modified on the segment loaders.
If I understand correctly, you're saying that they could be static for a single playback? If so, then maxGoalBufferLength could be a statically configured value (and essentially is that), but goalBufferLength and backBufferLength must be dynamic to support dynamic buffer sizes.
Imagine this situation, as it may help to explain why. You have configured contrib-hls with a 30s back buffer and a 60s forward buffer, and start playing a high-res stream. At 10s, you bust the size of your browser's buffer and a QuotaExceededError occurs. That means you can have, at any given time, ~10s of the stream buffered. Any more than that and you've got a problem. But if you're keeping 30s of back buffer, then once those 10s are behind your playhead, you won't be able to download any more data (as you won't be ejecting anything from the back buffer). So you'll just end up in an unrecoverable state of not being able to play back anything other than those 10s.
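The example above can be reduced to a bit of arithmetic. This is a hypothetical helper for illustration only, not code from the PR:

```javascript
// If the browser can only hold ~10s of this stream, a fixed 30s
// back-buffer goal eventually leaves no room for forward data.
function maxForwardSeconds(capacitySeconds, backBufferGoal, secondsBehindPlayhead) {
  // seconds we refuse to evict from behind the playhead
  const retained = Math.min(backBufferGoal, secondsBehindPlayhead);

  // whatever capacity remains is available for the forward buffer
  return Math.max(0, capacitySeconds - retained);
}

// At the start of playback there is nothing behind the playhead:
console.log(maxForwardSeconds(10, 30, 0));  // 10
// Once those 10s are behind the playhead, nothing can be evicted,
// so no new data can ever be downloaded -- the unrecoverable state:
console.log(maxForwardSeconds(10, 30, 10)); // 0
```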
It's a bit more complicated than that, but hopefully it's instructive enough to get across why the dynamic adjusting is important. If I misunderstood and you were motivating for dynamic buffers, but just another way of doing it, then I'm afraid you'll have to be a bit more -vvvv as I don't get what you're saying.
Why does the time matter if we are also using the amount of data in the buffer?
That's true that data is the important factor here, but the underlying MSE functions (like removing from the buffer) work with time, not data. So we need at least rough estimations of the equivalent time.
That being said, maybe I missed something, so if you can think of a way of doing it without time, I'm all ears!
Seems like trimming the back buffer should be done in another method outside checkBuffer_
It is. checkBuffer_ is simply an algorithm to see whether or not a new segment should be downloaded, based on what is currently in the buffer. In this case, the answer is no, but it also tries to free up stuff in the back buffer since the check for "do we have more than goalBufferLength in the buffer?" returns false.
If I understood correctly, we are appending and removing whole segments in this PR right? Why is it needed to split up the segment into chunks?
You understand correctly (though it has always been my intention to add subdividing by GOP when making this PR against VHS). We need to split it into 5MB chunks because Firefox will throw a QuotaExceededError on the first append when the segment is too large. You can trigger it with this feed.
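The chunked append can be sketched like this (the 5MB constant and function name are assumptions for illustration, not the PR's exact code):

```javascript
// Split a segment's bytes into <=5MB views so no single appendBuffer()
// call is large enough to trip Firefox's first-append QuotaExceededError.
const CHUNK_BYTES = 5 * 1024 * 1024;

function chunkSegment(bytes, chunkSize = CHUNK_BYTES) {
  const chunks = [];

  for (let offset = 0; offset < bytes.byteLength; offset += chunkSize) {
    // subarray() returns a view onto the same memory, so no bytes are copied
    chunks.push(bytes.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// A 12MB segment becomes three appends: 5MB, 5MB, 2MB
const chunks = chunkSegment(new Uint8Array(12 * 1024 * 1024));
console.log(chunks.map((chunk) => chunk.byteLength));
```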
Is there a specific use for the min bytes in the buffer? Would the ideal be to play it safe and always assume the max is in the buffer?
As you'll see in the code, the way things work out, in certain situations the safest thing to do is take the max, and in other situations the safest thing to do is take the min.
I thought this exists in contrib-media-sources?
It removes all cues within a time range. removeNonBufferedSegmentCues loops through the buffers and removes cues wholly outside the buffered data. It could use removeCuesFromTrack for the actual removal, but you would still need to determine the time ranges to remove.
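A minimal sketch of that loop, with plain objects standing in for TextTrack cues and TimeRanges (the function name and shapes are assumptions, not the PR's implementation):

```javascript
// Remove metadata cues that lie wholly outside every buffered range.
// `track` mimics a TextTrack ({ cues, removeCue }) and `ranges` is an
// array of { start, end } objects standing in for a TimeRanges object.
function removeNonBufferedCues(track, ranges) {
  const overlapsBuffer = (cue) =>
    ranges.some((range) => cue.endTime > range.start && cue.startTime < range.end);

  // iterate over a copy, since removal mutates the cue list
  for (const cue of track.cues.slice()) {
    if (!overlapsBuffer(cue)) {
      track.removeCue(cue);
    }
  }
}

const track = {
  cues: [{ startTime: 0, endTime: 2 }, { startTime: 10, endTime: 12 }],
  removeCue(cue) {
    this.cues = this.cues.filter((c) => c !== cue);
  }
};

// Only 9s-20s is buffered, so the 0s-2s cue is dropped
removeNonBufferedCues(track, [{ start: 9, end: 20 }]);
console.log(track.cues); // [{ startTime: 10, endTime: 12 }]
```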
Reading this naively, it seems like the remove and reappend attempt can be done before the original pendingCallback is run. Is there something preventing this?
Can you clarify what you mean? Every append has a callback that needs to be run after the append succeeds. If the buffer is full, we need to remove data from the buffer, then append the original data to the buffer, then run the original callback. However, since the SourceUpdater needs to act like an abstraction of a native SourceBuffer, the SegmentLoader needs to make sure that sourceBuffer.updating === false, otherwise any remove or append operations will fail. In SourceUpdater, updating will be true if pendingCallback_ is not null. I experimented with other ways of having updating be false, but I could never get them working.
This is certainly a place in the code where a fresh pair of eyes could help.
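The flow described above, reduced to a sketch. The class and method names mirror the description, but this is not the PR's actual SourceUpdater, and the real SourceBuffer is asynchronous; appendFn here is synchronous purely for illustration:

```javascript
// Sketch of the pendingCallback_/deferredCallback_ dance: a failed append
// parks its callback so updating() reports false, which lets the loader
// issue a remove() and then retry the same append.
class SourceUpdaterSketch {
  constructor() {
    this.pendingCallback_ = null;
    this.deferredCallback_ = null;
  }

  updating() {
    return this.pendingCallback_ !== null;
  }

  appendBuffer(appendFn, done) {
    this.pendingCallback_ = done;
    try {
      appendFn();
      this.pendingCallback_ = null;
      done();
    } catch (e) {
      if (e.name !== 'QuotaExceededError') {
        throw e;
      }
      // park the callback; updating() is now false, so remove() is allowed
      this.deferredCallback_ = done;
      this.pendingCallback_ = null;
    }
  }

  remove() {
    if (this.updating()) {
      throw new Error('InvalidStateError');
    }
    // ...would call sourceBuffer.remove(start, end) here...
  }

  retryAppend(appendFn) {
    const done = this.deferredCallback_;
    this.deferredCallback_ = null;
    this.appendBuffer(appendFn, done);
  }
}

const updater = new SourceUpdaterSketch();
let appended = false;
const quotaError = () => {
  const e = new Error('buffer full');
  e.name = 'QuotaExceededError';
  throw e;
};

updater.appendBuffer(quotaError, () => { appended = true; });
console.log(updater.updating()); // false -> remove() may proceed
updater.remove();
updater.retryAppend(() => {});   // retry succeeds, original callback fires
console.log(appended);           // true
```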
If I understand correctly, you're saying that they could be static for a single playback? If so, then maxGoalBufferLength could be a statically configured value (and essentially is that), but goalBufferLength and backBufferLength must be dynamic to support dynamic buffer sizes.
heh, sorry, I definitely was not being specific enough in my last comment. I think we're on the same page insofar as the need to dynamically change the buffer sizes. What I meant was that instead of having getters/setters for goalBufferLength, maxGoalBufferLength, and backBufferLength at the masterPlaylistController level, they could be getters/setters at the segmentLoader level. That would allow different values for different loader types, though admittedly I don't see an immediate need for that level of flexibility.
That's true that data is the important factor here, but the underlying MSE functions (like removing from the buffer) work with time, not data. So we need at least rough estimations of the equivalent time.
Oh interesting, I didn't actually realize that remove used times 🤔 I suppose a potential alternative would be to try and match a TimeRanges object returned from buffered(). Not sure if that would make a significant difference though, depending on how remove actually goes about removing data from a sourceBuffer.
It is. checkBuffer_ is simply an algorithm to see whether or not a new segment should be downloaded, based on what is currently in the buffer. In this case, the answer is no, but it also tries to free up stuff in the back buffer since the check for "do we have more than goalBufferLength in the buffer?" returns false.
What I meant was that the order of execution could be:
- trim back buffer if needed and adjust other buffers to match
- checkBuffer_ to see if we need to download the next segment

In this scenario the functions of (i) trimming the back buffer and (ii) checking if we should download the next segment are clearly separated.
As you'll see in the code, the way things work out, in certain situations the safest thing to do is take the max, and in other situations the safest thing to do is take the min.
Interesting 🔍
It removes all cues within a time range. removeNonBufferedSegmentCues loops through the buffers and removes cues wholly outside the buffered data. It could use removeCuesFromTrack for the actual removal, but you would still need to determine the time ranges to remove.
Gotcha
Can you clarify what you mean? Every append has a callback that needs to be run after the append succeeds. If the buffer is full, we need to remove data from the buffer, then append the original data to the buffer, then run the original callback. However, since the SourceUpdater needs to act like an abstraction of a native SourceBuffer, the SegmentLoader needs to make sure that sourceBuffer.updating === false, otherwise any remove or append operations will fail. In SourceUpdater, updating will be true if pendingCallback_ is not null. I experimented with other ways of having updating be false, but I could never get them working.
Interesting 🤔 Sounds like you hit a wall with how the code is currently set up. I'll have to take a more in-depth look today to see if there's another way to do this that doesn't involve nulling out pendingCallback_. I'll let you know if I come up with something half-usable 😄
they could be getters/setters at the segmentLoader level. That would allow different values for different loader types, though admittedly I don't see an immediate need for that level of flexibility.
Right ok, I understand what you're getting at now. That was actually the original design I implemented but found that I needed to change it to what you see now. That's probably a design decision I should have documented in the PR description.
IIRC, it was something related to switching rates. I may just be thinking of this, but I feel like there was something else that caused me to pull that functionality into the master playlist controller.
If you'd like I can try pushing them down to the SegmentLoader. This code did go through several heavy rewrites and redesigns, so it's entirely possible that the requirements at the time are no longer needed.
I suppose a potential alternative would be to try and match a TimeRanges object returned from buffered().
I'm not sure I follow here? We want to know the amount of time in the buffer when we get a QuotaExceededError so that we can:
a) know how much forward buffer (in s) we should have before we stop downloading chunks (because we don't want to put too many bytes into the buffer)
b) pass time-based parameters to MSE functions (like remove(0, currentTime - backBufferLength))
It's annoying, but everything uses time, except for the error which is triggered by bytes.
What I meant was that the order of execution could be:
- trim back buffer if needed and adjust other buffers to match
- checkBuffer_ to see if we need to download the next segment
True, we could trim the back buffer on every loop, and not only when it looks like the next segment will bust the buffer. That way, checkBuffer_ (mostly) remains a series of yes/no questions that take no actions.
There is kind of an open question to me here, which is how we should prioritize the various goals (and thus how/when we should trim/not download). For instance, say you've got a back buffer goal of 10s, a forward buffer goal of 30s, and GOPs every 2s. Imagine that your forward buffer falls below 30s, but you see that the next segment will bust your buffer. Should you slice a couple of GOPs off your back buffer, even if it brings it down to 6s? Or should you just wait until you can download that segment, and not trim off any more than 10s?
This is a part of the PR that I very much want feedback on, so I'm anxious to hear what you think is the best behaviour once you've gone through all the code.
Sounds like you hit a wall
That is 100% how I would describe it, yes :sweat_smile:
If you'd like I can try pushing them down to the SegmentLoader. This code did go through several heavy rewrites and redesigns, so it's entirely possible that the requirements at the time are no longer needed.
Let's hold off until we get some of the comments around the getter/setter methods resolved, maybe there's just confusion on my part on how this all connects together 👍
It's annoying, but everything uses time, except for the error which is triggered by bytes.
Yeah, now that I've seen the rest of the code it wouldn't make sense to do things differently just for this part of the code 👍
There is kind of an open question to me here, which is how we should prioritize the various goals (and thus how/when we should trim/not download). For instance, say you've got a back buffer goal of 10s, a forward buffer goal of 30s, and GOPs every 2s. Imagine that your forward buffer falls below 30s, but you see that the next segment will bust your buffer. Should you slice a couple of GOPs off your back buffer, even if it brings it down to 6s? Or should you just wait until you can download that segment, and not trim off any more than 10s?
Hmm, this is a good question. Typically, I would say that we'd want to prioritize the forward buffer (to allow smoother rendition switches), but that choice could be potentially harmful when working with very large segments. Going to bring this to the other maintainers on Monday too!
That is 100% how I would describe it, yes 😅
😆
So I noticed a few things while testing this:
1️⃣ safeBackBufferTrimTime can return negative numbers 😭
2️⃣ Should timeupdateHandler be turned off once we call remove? That's the earliest point that we can do it; we don't want to run remove multiple times for the same values. Currently it's possible that we'll call remove while a remove is in process.
3️⃣ It would be nice to have helpers in MPC for forwardBufferLength (separate from goalBL) and backBufferLength that just tell you exactly how many seconds are in the buffer ahead of and behind the playhead (each returns a static value and is a general convenience method):
forwardBufferLength = () => buffered.end(last) - currentTime
backBufferLength = () => currentTime - buffered.start(first)
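Fleshed out with a guard for the empty-buffer case, those helpers might look like the following. This is a sketch of the suggestion above, not code that exists in the PR:

```javascript
// Convenience helpers: seconds buffered ahead of and behind the playhead.
// `buffered` mimics a TimeRanges object (length, start(i), end(i)).
function forwardBufferLength(buffered, currentTime) {
  if (buffered.length === 0) {
    return 0;
  }
  return Math.max(0, buffered.end(buffered.length - 1) - currentTime);
}

function backBufferLength(buffered, currentTime) {
  if (buffered.length === 0) {
    return 0;
  }
  return Math.max(0, currentTime - buffered.start(0));
}

// Mock TimeRanges covering 5s-45s with the playhead at 20s:
const buffered = { length: 1, start: () => 5, end: () => 45 };
console.log(forwardBufferLength(buffered, 20)); // 25
console.log(backBufferLength(buffered, 20));    // 15
```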
4️⃣ A couple of different observations on adjustBuffers():
- Calling adjustBuffers in checkBuffer_: I think this is actually not useful, as trimBackBuffer should be ensuring that the forwardBuffer is prioritized. If it is not doing that, then we have an issue with how we are calculating safeBackBufferTrimTime. It's likely that we are "seeing more back buffer than forward buffer" because we are using the old backBuffered time, and we should wait for trimming to be complete via updateend before checking the buffered times again anyway.
- In adjustBuffers we don't actually look at how much is actually in the backBuffer. As far as I understand, backBufferLength is an actual value in the buffer rather than a goal, so this doesn't seem correct. backBufferLength should reflect the time in the buffer behind the playhead.
- Having the adjustBuffers event seems undesirable, since I believe we want to adjust buffers after removing the appropriate amount of back buffer? Either way, having adjustBuffers() be called more clearly either before or after handleBufferMaxed would be good.

5️⃣ For minBufferedBytes at least, it seems to be used in 2 ways:
i. to calculate the min possible bytes in the buffer
ii. to figure out how many bytes might be in a range of buffered time: https://github.com/squarebracket/videojs-contrib-hls/blob/79f53fed214a241df7e02801fb52deb1ca662ce1/src/segment-loader.js#L1369.
I think it would be much cleaner if these two purposes were split out into two separate methods. minBufferedBytes would be for purpose i., and another method would calculate the bytes for a time range. Maybe bytesInTimeRange(start, end).
6️⃣ This series of things happens twice in handleBufferMaxed: https://github.com/squarebracket/videojs-contrib-hls/blob/79f53fed214a241df7e02801fb52deb1ca662ce1/src/segment-loader.js#L1387-L1394
- safeBackBufferTrimTime
- minBufferedBytes
- if you can trim off more than the segment that caused the error, then removeWholeSegments

It would be great to pull that out into another method to make it easier to read and debug.
After looking at the code for a bit, I think that the code has portions of a couple of different solutions in it (undefined browser behavior is fun!) and they just need to be unified. Here's a bit of pseudocode with a slightly different alternative to how you have things set up.

When a QuotaExceededError happens:
- record the bytes (maxNumberOfBytes) and seconds in the buffer (maxNumberOfSeconds) so we can use this to determine the new caps for the buffer.
- use the maxNumberOfSeconds to determine whether we need to set a new goalBufferLength value and maxGoalBufferLength value, prioritizing having more seconds for the forward buffer than the back buffer. We only need to set a new one if the maxNumberOfSeconds is very close to or less than the current goalBufferLength value. However, we could make the goalBufferLength method return a static value of the config goalBufferLength value to ensure we don't grow over that value. Don't change the goalBufferLength after this if the maxNumberOfSeconds does not change.

When checking the buffer to see whether to download the next segment:
- if we are unable to remove enough from the buffer to download a new segment, we must error; we cannot recover, as the segment sizes are too large.
:one: Fixed.
:two: Good question. It's probably fine to remove the handler when it thinks it can append, provided the calculations are correct 100% of the time and the browser isn't engaged in shenanigans. You're right that it's possible remove will be called twice, in which case an InvalidStateError would be thrown.
:three: Did we conclusively decide on renames? backBufferLength is already a function on the MPC.
:four: I think you're misunderstanding the point of adjusting the buffers. It is unrelated to actual buffered data, simply on setting the desired lengths of the forward/back buffer. Trimming the back buffer simply lobs off data. In order to reduce priority on the back buffer and increase priority on the front buffer, the respective goal lengths must change. Or I suppose, I should say, that's the way I understand priority -- if you have a fixed amount of time that you can buffer (which is the case after a QuotaExceededError), then increasing priority on the forward buffer means allocating more of that overall fixed time to the forward buffer.
At first glance, it does appear that I could perhaps be using the forward buffer here instead of secondsInBuffer. But I will need to meditate on how that will change things.
It's likely that we are "seeing more back buffer than forward buffer" because we are using the old backBuffered time and we should wait for trimming to be complete via updateend before checking the buffered times again anyway.
I find this extremely unlikely. In my experience the problem comes from not being able to set the back buffer less than a target duration (which is of course done since we aren't slicing on GOPs).
As far as I understand, backBufferLength is an actual value in the buffer rather than a goal
This is incorrect; it is the equivalent of goalBufferLength for the amount to buffer behind the playhead.
backBufferLength should reflect the time in the buffer behind the playhead.
No, semantically it should be the target amount of back buffer. In the latest code it is called backBufferGoal.
Having the adjustBuffers event seems undesirable since I believe we want to adjust buffers after removing the appropriate amount of back buffer?
No, definitely not this. To simplify the explanation, assume a fixed-bitrate stream. When you get the QuotaExceededError, you know that staying within currentlyBufferedSeconds of total (fwd + back) buffer will mean you will never get another QuotaExceededError, because it's the next append that pushed you over the top. This is a good conservative estimate, without being too conservative.
If you trimmed your back buffer before adjusting the buffers, that means you'd have smaller buffer sizes than you can actually handle. In this case, you'd be going too conservative, which means a greater likelihood of getting stuck.
Either way, having adjustBuffers() be called more clearly either before or after handleBufferMaxed would be good.
That is actually exactly what is done, see source-updater.js. It triggers adjustBuffers, then it triggers event (which is bufferMaxed).
:five: No, it is only ever used for those two purposes described. It calculates the lower limit on the number of bytes buffered within the range start, end. This is because the browser may yank bytes from the buffer, so you always have to make sure your stored values haven't changed without you knowing. See this note in the spec.
However, upon closer reading of the spec, it seems to say that coded frame eviction will take place only after appending to the SourceBuffer, so we could potentially run removeNonBufferedSegmentCues on the append handler, and then remove all the buffer checking nonsense from minBufferedBytes and maxBufferedBytes, which would be fucking fantastic.
:six: The section you highlighted doesn't repeat. That first call to safeBackBufferTrimTime is actually inside an else block.
Trixy hobbitses, I know.
My questions/comments about your approach:
- The SourceUpdater would trigger on the SegmentLoader, which would trigger on the MPC? Also, you don't need to pass those values since they're accessible by the MPC, but that's just some pedantry.
- Record maxNumberOfBytes and maxNumberOfSeconds before any trimming. Always adjust your maxes before trimming.

use the maxNumberOfSeconds to determine whether we need to set a new goalBufferLength value and maxGoalBufferLength value
When would this not be the case? We're never going to buffer more than forwardBufferGoal + backBufferGoal. So we'd never hit a QuotaExceededError where we don't need to adjust the buffer sizes.
prioritizing having more seconds for the forward buffer than the back buffer
How would you do this? A sliding scale like GBL? Currently I'm doing newBackBufferGoal = 1/3 * maxNumberOfSeconds and maxForwardBufferGoal = 2/3 * maxNumberOfSeconds.
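That split, written out as arithmetic (the helper name is made up for illustration; this is not code from the PR):

```javascript
// After a QuotaExceededError caps the total safe buffer at
// maxNumberOfSeconds, allocate 1/3 to the back buffer goal and
// 2/3 to the forward buffer goal.
function splitBufferGoals(maxNumberOfSeconds) {
  return {
    newBackBufferGoal: maxNumberOfSeconds / 3,
    maxForwardBufferGoal: (2 * maxNumberOfSeconds) / 3
  };
}

console.log(splitBufferGoals(30)); // { newBackBufferGoal: 10, maxForwardBufferGoal: 20 }
```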
We only need to set a new one if the maxNumberOfSeconds is very close to or less than the current goalBufferLength value.
As explained above, it's impossible to have a QuotaExceededError and have more time in the buffer than the GBL, since it determines scheduling of downloads.
don't change the goalBufferLength after this if the maxNumberOfSeconds does not change.
Why would trimming the back buffer cause maxNumberOfSeconds to change? Unless you're saying to just remove the buffer readjustment that I am doing.
if we are unable to remove enough from the buffer to download a new segment, we must error, we cannot recover as the segment sizes are too large.
We only know we're screwed for sure if we've stalled AND the next segment will bust the buffer, since any advancement of the playhead may result in us being able to trim some more back buffer. We do know, though, that if we try to readjust the back buffer, and we're already at a target duration, then we're in danger.
Updated some code, haven't updated tests.
2️⃣ Do you know if an InvalidStateError would be bubbled up to the player?
3️⃣ I believe backBufferLength is new in this PR. From talking to the rest of the contributors, we should be able to rename goalBufferLength() as well, since it's an internal API. It was suggested that the configuration values GOAL_BUFFER_LENGTH and MAX_GOAL_BUFFER_LENGTH not be changed for backwards compatibility. Maybe we could add deprecation warnings to the getters/setters for those values and create an issue to remind us to change them in the next major (this can be moved to VHS once contrib-hls is merged in).
4️⃣ In the case where we have a fixed buffer length (after a QuotaExceededError) I would expect that we could remove enough from the back buffer until we get larger-sized segments and cannot safely remove more from the back buffer. Is that your understanding as well? Perhaps the buffer lengths need to be adjusted immediately after we detect this scenario then, to group the logic in one place.
I find this extremely unlikely. In my experience the problem comes from not being able to set the back buffer less than a target duration (which is of course done since we aren't slicing on GOPs).
Would be cool to get a usage event for this case.
That is actually exactly what is done, see source-updater.js. It triggers adjustBuffers then it triggers event (which is bufferMaxed).
What I actually meant was to just call adjustBuffers() in handleBufferMaxed_. The event seems to not be useful outside the bufferMaxed use case, unless I missed another scenario.
5️⃣ I agree that this portion of the spec seems to imply that the frame eviction will occur during the execution of an append on the source buffer. If we can remove some of the buffer bounds modifications, that would be great!
If not, I think it would be clearer if this was removed and the contract for the function is:
minBufferedBytes(start = 0, end = Infinity) {
which would set the default values, or alternatively replace with:
if (start === undefined) {
start = 0;
}
if (end === undefined) {
end = Infinity;
}
just so it's absolutely clear what case is being handled.
6️⃣ I was suggesting restructuring that bit of code so you could reuse a function:
function trimSegmentsForSeconds(removeToTime, bytesTried) {
  const bytesTrimmable = this.minBufferedBytes(0, removeToTime);

  if (bytesTrimmable >= bytesTried) {
    this.removeWholeSegments(removeToTime);
    return true;
  }
  return false;
}
and then restructure this as:
let removeToTime;

if (safeRemovePoint) {
  removeToTime = safeRemovePoint;
} else {
  removeToTime = safeBackBufferTrimTime(this.seekable_(),
                                        this.currentTime_(),
                                        this.playlist_.targetDuration || 10,
                                        this.backBufferLength_());
}

const removedWholeSegment = trimSegmentsForSeconds(removeToTime, bytesTried);

if (!removedWholeSegment) {
  let **passedSegmentBoundary** = this.currentTime_() + this.playlist_.targetDuration;
  let buffered = this.buffered_();
  let **tooCloseToBufferEnd** = buffered.end(buffered.length - 1) - 2;

  timeupdateHandler = () => {
    let currentTime = this.currentTime_();

    if (currentTime < passedSegmentBoundary) {
      return;
    } else if (currentTime > tooCloseToBufferEnd) {
      this.removeWholeSegments(currentTime - this.playlist_.targetDuration);
      return;
    }

    removeToTime = safeBackBufferTrimTime(this.seekable_(),
                                          currentTime,
                                          this.playlist_.targetDuration || 10,
                                          this.backBufferLength_());
    trimSegmentsForSeconds(removeToTime, bytesTried);
  };
}
which makes the intention of the code a little clearer. (the **whatever** is just because I changed those variable names)
I think we discussed the majority of differences between my suggested algorithm and the PR out-of-band, so I'll just address some of the ones we didn't.
Why would trimming the back buffer cause maxNumberOfSeconds to change? Unless you're saying, just remove the buffer readjustment that I am doing.
I think, as I mentioned earlier in this comment, the specific case that would actually require changing the goal values after making the buffer goal a static value (after QuotaExceededError) would be if the segment sizes/quality increase to a point that requires the forward goal buffer value to decrease. I think I'm not 100% convinced yet that a back buffer goal is needed as a separate concept from "remove x amount from the buffer".
That would require some redesigning of the solution, however, and retesting so it's mostly up to you. The change you suggested in your last comment, continuing to use forwardBufferGoal as the newForwardGoal and then reducing the backBufferGoal based on what the total available buffer is, is similar to what I was trying to go for.
We only know we're screwed for sure if we've stalled AND the next segment will bust the buffer, since any advancement of the playhead may result in us being able to trim some more back buffer. We do know, though, that if we try to readjust the back buffer, and we're already at a target duration, then we're in danger.
If we aren't able to download the next segment and we're close to running out of buffer, haven't we stalled? For example if we can only fit one segment in the buffer because the segment sizes are that large.
Something I didn't express very clearly in my last comment was that there seem to be 3 strategies employed at the moment in this PR. I wonder if we could simplify the code by whittling this down into 2 strategies, perhaps i and ii or ii and iii (where iii would be used to prevent the Firefox error).
Hopefully this will be looked more into in the future. Can't play 1080p videos. After 30 minutes of playing the video I get an error:
video.js:128 VIDEOJS: ERROR: (CODE:-3 undefined) Failed to execute 'appendBuffer' on 'SourceBuffer': The SourceBuffer is full, and cannot free space to append additional buffers. MediaError
logByType @ video.js:128
.ts files about 20-30 MB each.
Thank you for your PR, but we will not be accepting new changes for this repo, and it will be archived very soon. I would advise you to open your PR against the next iteration of this project at https://github.com/videojs/http-streaming.
Description
This is a PR to allow for streams with large segment sizes, such as with 4K media. When a QuotaExceededError happens, the forward and back buffer are dynamically resized, and we keep track of how much data is in the buffer. We then use these as checks within the logic for determining if a segment should be downloaded, or if we should just chill.

Note that this PR doesn't use any GOP info for determining what's safe or not. The reason is that we'd have to parse out the GOP info for fMP4 streams. I figure the best time to add that functionality is in VHS, where a) media-sources is part of the code and b) we'll have to parse fMP4 stuff anyway for things like captions. So that's planned for when this is ported to VHS.
Requires the code in videojs/videojs-contrib-media-sources#178 to work.
Specific Changes proposed
- Add a BACK_BUFFER_LENGTH config
- Make goalBufferLength, maxGoalBufferLength, and backBufferLength getters+setters on the master playlist controller, and pass them through to the segment loaders
- Set the aforementioned values when we get a QuotaExceededError based on the amount of time and data in the buffers when it occurs
- Make safeBackBufferTrimTime take a backBufferLength argument to use instead of the default 30s
- Add nextSegmentSize in the segment loader to determine if we might bust the buffer, either based on the current playlist's BANDWIDTH attribute if it exists or a running average of segment sizes if not
- checkBuffer_ checks to see if we might bust the buffer, and trims the back buffer and returns null if that's the case
- Slice up the segment into 5MB chunks if we get a QuotaExceededError on the first append
- Trim the back buffer if we get a QuotaExceededError after the first append
- Add functions for calculating the min and max bytes in the buffer based on segment metadata cues (playhead is rarely at a segment boundary so we need both min and max functions for maximum safety)
- Add function for removing non-buffered segment metadata cues
- In the SourceUpdater, when we get a QuotaExceededError, move the pendingCallback to another variable (deferredCallback) and null out pendingCallback so that we can process a remove() and try to reappend the segment that caused the error

Requirements Checklist