w3c / webcodecs

WebCodecs is a flexible web API for encoding and decoding audio and video.
https://w3c.github.io/webcodecs/

decodeQueueSize/encodeQueueSize is not updated on the task where dequeue event is fired #724

Open youennf opened 1 year ago

youennf commented 1 year ago

The usual pattern is to update a read-only attribute value and synchronously fire the corresponding event. This is not the case for decoders/encoders, and I wonder why. Note this means that the timing at which the control messages are run is somehow observable to JS.
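For context, the usual pattern can be simulated outside WebCodecs. This is purely an illustrative sketch; `MiniCodec` and `_dequeue` are made-up names, not spec or implementation names:

```javascript
// Sketch of the usual DOM pattern (hypothetical names, not from the
// WebCodecs spec): update the read-only attribute, then fire the event
// synchronously, so listeners always observe the already-updated value.
class MiniCodec extends EventTarget {
  #queueSize = 0;
  get decodeQueueSize() { return this.#queueSize; }
  decode() { this.#queueSize += 1; }
  // Simulates the codec finishing one chunk.
  _dequeue() {
    this.#queueSize -= 1;                     // update the attribute first...
    this.dispatchEvent(new Event('dequeue')); // ...then fire synchronously
  }
}
```

A `dequeue` listener on this sketch always sees the already-decremented size; the point of this issue is that the spec as written does not guarantee that pairing for real decoders/encoders.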

Also, now that we execute the error callback synchronously as part of the close encoder/decoder algorithm, the corresponding dequeue event is fired in a task after the error callback execution. This might be a bit surprising to applications.

I wonder whether we could consider something along those lines:

sandersdan commented 1 year ago

I agree that it makes sense to make the observable value update and the event emission atomic, which I think means snapshotting the queue size and updating that snapshot when the event is emitted.
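One way to read that snapshotting idea as code (a sketch with made-up names, not spec text): the codec side mutates an internal counter, while the JS-visible `decodeQueueSize` is a snapshot that only changes in the same task that fires the dequeue event.

```javascript
// Illustrative snapshotting sketch (hypothetical names): the internal
// counter tracks the real queue; decodeQueueSize exposes a snapshot
// that is updated atomically with the dequeue notification.
class SnapshotQueue {
  #internal = 0;   // mutated by the (conceptual) codec work queue
  #snapshot = 0;   // what JS observes via decodeQueueSize
  get decodeQueueSize() { return this.#snapshot; }
  enqueue() { this.#internal += 1; this.#snapshot += 1; }
  // Codec finished a chunk: update internal state now, but defer the
  // observable update so it happens together with the event callback.
  dequeueFromCodec(onDequeue) {
    this.#internal -= 1;
    queueMicrotask(() => {
      this.#snapshot = this.#internal; // snapshot updated...
      onDequeue();                     // ...atomically with the event
    });
  }
}
```

Synchronously after `dequeueFromCodec` returns, `decodeQueueSize` is unchanged; it only drops in the later task, together with the callback.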

What is the reason for firing the dequeue event for non-AbortError cases?

sandersdan commented 1 year ago

After a quick discussion about this, one issue to consider is that JS may be appending a new chunk concurrently with the decode process dequeuing a chunk. It's probably okay for the enqueue loop to terminate early, but synchronizing the adds/removes with event snapshots may be complicated.

I'll review earlier discussions to see if there was a specific reason the current design was selected.

sandersdan commented 12 months ago

The original discussion happened primarily in issue #494. It seems that having the queue size not increment in cases where the codec processes the chunk immediately was considered desirable, so that apps could append more immediately.

I'm no longer sure if this is the desired behavior. As noted in the thread, there are codecs that can synchronously process an unlimited number of chunks, and it may be better for apps to release control every so often in that case.

None of these considerations are affected in close cases, so I don't have any objections to those changes (although I would prefer to never emit a queue size change event once closing starts).

youennf commented 12 months ago

> I would prefer to never emit a queue size change event once closing starts.

I do not really see any justification for it at the moment. Let's start with this then. @aboba, @padenot, thoughts?

> The original discussion happened primarily in issue #494. It seems that having the queue size not increment in cases where the codec processes the chunk immediately was considered desirable, so that apps could append more immediately.

> I'm no longer sure if this is the desired behavior.

I'll try to read https://github.com/w3c/webcodecs/issues/494. In general, I am fuzzy about "saturated". For instance, how is the saturated check done from the control thread, given that the spec says any step referencing [codec implementation] should run via enqueuing in the codec queue?

Would the following sketch work in that new world?

I guess we could decrement synchronously if not saturated as part of this algorithm, if we deem it appropriate under some specific conditions. Something like: [[control message queue]] is empty at the time we do the saturated check, and we are still in the encode method.
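The condition described above could be sketched roughly as follows. Everything here is hypothetical (the class, the `saturated` flag, the control queue representation); it only illustrates the "decrement synchronously while still in encode()" idea, not actual spec or browser behavior:

```javascript
// Illustrative sketch: inside encode(), if the control message queue is
// empty and the codec is not saturated, hand the chunk over and
// decrement synchronously; otherwise defer to the codec work queue.
class EagerEncoder {
  encodeQueueSize = 0;
  saturated = false;     // stand-in for the spec's saturated check
  #controlQueue = [];
  encode(chunk) {
    this.encodeQueueSize += 1;
    if (this.#controlQueue.length === 0 && !this.saturated) {
      this.#process(chunk);           // codec takes the chunk immediately
      this.encodeQueueSize -= 1;      // synchronous decrement, still in encode()
    } else {
      this.#controlQueue.push(chunk); // deferred to the codec work queue
    }
  }
  #process(_chunk) { /* hand off to the codec implementation */ }
}
```

In the fast path the queue size is back to its previous value by the time `encode()` returns, which matches the #494 goal of letting apps append more immediately.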

sandersdan commented 12 months ago

> how is the saturated check done from the control thread

The parallel queue is not necessarily a thread.

As noted in #494, Chrome's implementation processes the work queue synchronously when work is enqueued by JS. This is observable in two ways:

  1. The queue size can be decremented while work is being enqueued. (E.g., decodeQueueSize doesn't change when calling decode(), or even becomes smaller.)
  2. Flush promises may be resolved synchronously while work is being enqueued (observable by messing with Object.prototype).

(1) was considered desirable in #494, but I don't know how to reconcile it with both the current spec text and JS API norms. (2) can be fixed in Chrome and I consider it a quality-of-implementation issue (although I'm not actually sure if this violates the rules of a 'parallel queue').
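The Object.prototype trick behind (2) can be demonstrated in isolation, with no WebCodecs involved: resolving a promise with a plain object performs a synchronous `then` lookup on it, so a patched `Object.prototype` reveals exactly when resolution happens. A minimal sketch:

```javascript
// Resolving a promise with a plain object triggers a synchronous
// Get(value, "then") per ECMA-262, so a 'then' accessor on
// Object.prototype observes the moment of resolution.
let observed = false;
Object.defineProperty(Object.prototype, 'then', {
  configurable: true,
  get() { observed = true; }, // returns undefined: promise still fulfills
});
new Promise(resolve => resolve({})); // executor runs synchronously
delete Object.prototype.then;        // undo the global patch
// 'observed' is already true here, before any microtask has run.
```

If a flush promise were resolved synchronously during enqueuing, this kind of probe would expose it, which is why it is observable JS behavior rather than a pure implementation detail.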

Also proposed in this thread is (3): queue size changes could be synchronously observable. In that case Chrome's implementation would probably still need to queue the events and apply a snapshotting system like you described, to make it look as though the queue is asynchronous.