WebAudio / web-midi-api

The Web MIDI API, developed by the W3C Audio WG
http://webaudio.github.io/web-midi-api/

Backpressure exposure for asynchronous send() #158

Open agoode opened 8 years ago

agoode commented 8 years ago

We have so far been able to use synchronous send but this provides no mechanism for backpressure.

The current implementation in Chrome uses large internal buffers to hide this issue, but this is wasteful and results in the browser silently dropping messages when the buffer overruns. This condition is uncommon with normal MIDI messages, but sysex messages can be of arbitrary length and can be slow to send, so Chrome puts limits on those as well.

In order to allow implementations to avoid these buffers and arbitrary limits, we need a mechanism for non-blocking send.

yutakahirano commented 8 years ago

Is Streams useful for the use case? cc: @domenic @tyoshino

domenic commented 8 years ago

This does sound like a pretty good fit for a writable stream... In fact MIDIOutput looks pretty WritableStream-like in general. It is specifically designed to take care of this queuing and backpressure-signaling concern for you, so that other specs can build on top of it and reuse the architecture.

I'm not sure about MIDIInput, as I'm not familiar enough with MIDI. The choice would be between the current spec's no-backpressure, event-based model (where if you aren't subscribed you miss the event), vs. a readable stream model with the data buffered for future reading, and potential backpressure when it's not read.

Unfortunately the spec's current setup where MIDIInput and MIDIOutput both derive from MIDIPort doesn't seem very easy to fit with the stream API. I'm not sure how we'd do this without just creating a parallel WritableMIDIPort type (and maybe ReadableMIDIPort).

There's also the issue that writable streams need some spec love, but if this is an urgent use case I can turn my attention to them and get that straightened out quickly.

agoode commented 8 years ago

The event-based model for MIDIInput is fine, since MIDI is essentially a multicast system, with no backpressure possible at the protocol level. MIDI doesn't guarantee reliable delivery.

Can WritableStream handle framing? We do parse the bytes and require that they are valid MIDI packets (1, 2, or 3 bytes in the common case, or arbitrary length in the case of sysex). Is this implementable as chunks in the streams standard? Can a stream reject an invalid chunk?

domenic commented 8 years ago

Can WritableStream handle framing? We do parse the bytes and require that they are valid MIDI packets (1, 2, or 3 bytes in the common case, or arbitrary length in the case of sysex). Is this implementable as chunks in the streams standard? Can a stream reject an invalid chunk?

Yes, for sure. The underlying sink's write hook (and the optional writev hook, once we get that working) can process the data in arbitrary ways, returning a promise for when the processing is finished; that promise can be rejected to indicate that the input was invalid.
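A minimal sketch of this idea, assuming the WritableStream global available in browsers and Node 18+. The sink validates MIDI framing before forwarding bytes, and rejecting from write() signals an invalid chunk to the producer. The names midiMessageLength, createMidiWritable, and sendToDevice are illustrative, and the length table is simplified (no running status, no system common messages):

```javascript
// Sketch only: a writable stream whose sink enforces MIDI framing.
function midiMessageLength(statusByte) {
  const type = statusByte & 0xf0;
  if (type === 0xc0 || type === 0xd0) return 2; // program change, channel pressure
  if (statusByte >= 0xf8) return 1;             // real-time messages
  if (type >= 0x80 && type <= 0xe0) return 3;   // note on/off, CC, pitch bend, ...
  return -1;                                    // data byte or unhandled status
}

function createMidiWritable(sendToDevice) {
  return new WritableStream({
    write(chunk) {
      const data = chunk.data;
      const status = data[0];
      const valid =
        status === 0xf0
          ? data[data.length - 1] === 0xf7 // sysex must arrive complete
          : midiMessageLength(status) === data.length;
      if (!valid) {
        // Rejecting here errors the stream, telling the producer the
        // chunk was not a valid MIDI packet.
        return Promise.reject(new TypeError('invalid MIDI packet'));
      }
      return sendToDevice(data); // resolves once the bytes are accepted
    },
  });
}
```

Per the Streams model, a rejected write errors the whole stream, so a producer that writes a malformed packet finds out immediately rather than having it silently dropped.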

jussi-kalliokoski commented 8 years ago

:+1: for using streams from me.

We could add a new method to MIDIPort, Promise<Stream> openAsStream(), to preserve backwards compatibility.

However, how do we solve sending timed messages?

bome commented 8 years ago

I'm somewhat reluctant about adding streams; it seems like mainly adding clutter to the Web MIDI API and its implementations while providing very little new functionality. Can't we just add a function sendAsync(data, timestamp) that returns a Promise, or define a MIDIOutput listener that fires whenever a MIDI message is delivered?

results in the browser silently dropping messages

That's really bad. In that case, the send() function should either block, or throw an exception of some sort (which would need to be defined in the spec).

cwilso commented 8 years ago

I'm REALLY uncomfortable rebasing on top of Streams, since it's highly unlikely you would be piping any ReadableStream into a MIDI output device. I think the best way to solve this is to have send() return a Promise (that resolves when the message has been sent to the hardware) and to expose the "current available output message size" (i.e. the maximum chunk size you can write right now). This would enable things that currently work to keep working, and it would enable developers who want to send large amounts of sysex data to do so too.
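A sketch of what this proposal's shape might look like. This is hypothetical API: the promise-returning send() and the availableSendSize attribute are names invented here for the "current available output message size" idea, not anything in the current spec:

```javascript
// Hypothetical usage sketch of a promise-returning send() plus an
// availableSendSize attribute (both names assumed, not spec API).
async function sendWhenRoom(output, message) {
  if (message.length > output.availableSendSize) {
    // The caller can wait and retry instead of letting the UA drop data.
    throw new RangeError('message larger than the currently available buffer');
  }
  return output.send(message); // resolves when delivered to the hardware
}
```

The point of the design is that the app, not the browser's hidden buffer, decides what to do when the output can't keep up.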


jussi-kalliokoski commented 8 years ago

I think the best way to solve this is have send() return a Promise (that resolves when it's been sent to the hardware), and expose the "current available output message size"

This would just be duplicating stream functionality, except in a more awkward and less interoperable way. I can see plenty of use cases for MIDI messages in Streams.

domenic commented 8 years ago

I agree that such a design is just duplicating streams and will need to reinvent the queuing mechanisms and so forth they are specifically designed to let other specs reuse. The "current available output message size" is further evidence of that kind of duplication (of the desiredSize property of a writable stream, after https://github.com/whatwg/streams/issues/318 is implemented). It's similar to reinventing an IDBRequest-style object when promises are available, based on the speculation that nobody would ever need to reuse the return value inside a promise callback.

However, how do we solve sending timed messages?

I'd assume each chunk would be of the form { timestamp, data } or similar.

jussi-kalliokoski commented 8 years ago

I'd assume each chunk would be of the form { timestamp, data } or similar.

That sounds reasonable.

cwilso commented 8 years ago

Can someone who understands streams put together a sketch of an API based on it, and examples of simple and complex usage?

jussi-kalliokoski commented 8 years ago

Maybe something like this:

interface MIDIOutput {
  // ...
  Promise<WritableStream> openAsStream({ size, highWaterMark });
}

interface MIDIInput {
  // ...
  Promise<ReadableStream> openAsStream({ size, highWaterMark });
}

Here are two usage examples. The first is just a dumb, non-synced sequencer writing to the first available MIDIOutput; the second takes advantage of piping to pipe from the first MIDIInput to the first MIDIOutput, filtering out messages that aren't note-on or note-off, pushing all messages to channel 3, pitch-shifting by 2 octaves, and buffering on message count with a high water mark of 1000.

EDIT: Note that the advanced example uses TransformStream, which is not in the Streams spec yet.
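The inline example code did not survive in this thread, so here is a hedged reconstruction of the first (dumb, non-synced sequencer) example. openAsStream() and the { data, timestamp } chunk shape are this thread's proposals, not shipped API; playPattern itself only needs a stream writer:

```javascript
const NOTE_ON = 0x90;
const NOTE_OFF = 0x80;

// Writes a fixed four-beat pattern through a stream writer. Awaiting each
// write honors backpressure: the promise resolves once the chunk is accepted.
async function playPattern(writer, startTime) {
  for (let beat = 0; beat < 4; beat++) {
    const t = startTime + beat * 250; // four notes per second
    await writer.write({ data: [NOTE_ON, 60, 100], timestamp: t });
    await writer.write({ data: [NOTE_OFF, 60, 0], timestamp: t + 200 });
  }
}

// In a browser, assuming the proposed API:
// const access = await navigator.requestMIDIAccess();
// const output = [...access.outputs.values()][0];
// const stream = await output.openAsStream({ highWaterMark: 1000 });
// await playPattern(stream.getWriter(), performance.now());
```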

toyoshim commented 8 years ago

Replying to Adam's first comment: send() is not synchronous at all. If it were, we could simply return a boolean, but we cannot. Generally speaking, we should avoid synchronous APIs on the main thread as much as possible. So what we want from this thread is a reliable send-like method.

Maybe using Streams is the right approach, but it feels a little complicated, as Chris said. I also feel Web MIDI should be aligned with other similar modern APIs; for instance, Web Bluetooth and WebUSB return Promises.

So my preference is something like bome's sendAsync, though sendAsync is a confusing name, since, as I said, the existing send is also asynchronous.

Here is my proposal. send: a user can use it for short messages and for sysex, but the implementation may drop messages if the user sends a lot of data in a short time, and very long sysex messages would always fail to send. sendLong: returns a Promise and cannot be called again before the previous call finishes. The implementation never drops a message because of buffer limits, but it does not guarantee that the message reaches the device: the MIDI transport layer has no fail-safe mechanism, and the message could still be dropped on an immediate browser or OS shutdown. We probably need to allow sending long sysex messages as fragments.

domenic commented 8 years ago

The sketch in https://github.com/WebAudio/web-midi-api/issues/158#issuecomment-184102371 is pretty reasonable, although we'd update it to use the writer design. I'll add some explicit demonstrations of backpressure support:

destination.openAsStream().then(s => s.getWriter()).then(async (writer) => {
  console.log(writer.desiredSize); // how many bytes (or other units) we "should" write

  // note: 0 means "please stop sending stuff", not "the buffer is full and
  // we will start discarding data". So, desiredSize can go negative.

  writer.write({ data: data1, timestamp: ts1 }); // disregard backpressure

  // wait for successful processing before writing more
  await writer.write({ data: data2, timestamp: ts2 });

  await writer.waitForDesiredSize(100); // wait until desiredSize goes >= 100

  writer.write({ data: oneHundredTwentyBytes, timestamp: ts3 });

  await writer.waitForDesiredSize(); // wait until desiredSize goes >= high water mark

  // also, maybe default timestamp to performance.now()?
});

I might also collapse @jussi-kalliokoski's .openAsStream(new CountQueuingStrategy({ highWaterMark: 1000 })) into .openAsStream({ highWaterMark: 1000 }).

toyoshim commented 8 years ago

After thinking about incremental writing of a large sysex, I noticed that it would allow a malicious page to lock all output ports exclusively: just sending a "sysex start" byte would lock the port forever. So we should keep the restriction that a user cannot send an incomplete or fragmented message.

So even with backpressure, the sysex size will still be limited by the maximum size of an ArrayBuffer.

toyoshim commented 8 years ago

@domenic Have you ever talked with the Web Bluetooth and WebUSB folks? It would be great if all of these APIs, including Web MIDI, were consistent in terms of API design. If there has already been such a discussion, I'd like to hear the opinions raised there.

domenic commented 8 years ago

In an offline thread @cwilso mentioned that @jussi-kalliokoski's examples are too complex and he'd like a three-liner. Here you go (slightly changed from the above since I am not sure why @jussi-kalliokoski made stream acquisition async):

const writer = midiOutput.asStream().getWriter();

await writer.write({ data: d1, timestamp: ts1 });
await writer.write({ data: d2, timestamp: ts2 });

bome commented 8 years ago

@toyoshim you can intersperse short messages during an ongoing sysex message, and the implementation could also use a timeout to abandon stalled sysex messages after a while.

Silently dropping MIDI messages is always bad; maybe the current send() function could return a bool, false on buffer full.

I agree that sendAsync is not good, but sendLong is also misleading. Also, why wouldn't it be possible to call your proposed sendLong function again before the previous one finishes? I would welcome "unlimited" buffering there! So, maybe sendBuffered? Or, spelled out: sendWithPromise?

domenic commented 8 years ago

Have you ever talked with the Web Bluetooth and WebUSB folks? It would be great if all of these APIs, including Web MIDI, were consistent in terms of API design. If there has already been such a discussion, I'd like to hear the opinions raised there.

I previously talked with Jeffrey about Web ... Bluetooth? ... and the conclusion was that since there was no backpressure support it wasn't urgent and we could wait on adding streams until later.

agoode commented 8 years ago

Stream acquisition is async since you could block for an arbitrary amount of time with an OS-level open() call.

domenic commented 8 years ago

Stream acquisition is async since you could block for an arbitrary amount of time with an OS-level open() call.

That's fine; it just means that the first write() won't succeed (or fail) until the open comes back. The stream object can still be used as usual in these scenarios.

agoode commented 8 years ago

Is there a way to force the open to complete, to ensure we don't have to wait until the write to determine success?

domenic commented 8 years ago

I don't understand what "force the open to complete" would mean. You can see an example implementation here if it helps: https://streams.spec.whatwg.org/#example-ws-backpressure (unfortunately the usage example still does not have writers and uses the coarse-grained "waiting" signal instead of the fine-grained desiredSize discussed above). The file descriptor gets opened immediately upon stream creation, but the stream machinery takes care (both in specs and in implementations!) of making sure not to write to the descriptor until that successfully completes.

agoode commented 8 years ago

So, we currently have the notion of pending in Web MIDI. This is the state where open has not yet fully completed. In streams, I think pending would mean desiredSize = 0 and open would mean desiredSize != 0?

It's good to know when the MIDI port is fully open, since we can signal in a UI that "preflight" is complete and all devices are fully ready to send/receive.

toyoshim commented 8 years ago

@bome The Web MIDI backend needs to multiplex outgoing messages from multiple MIDIAccess instances, and once one of them contains a "sysex start" byte that has been sent to the actual device, we cannot abandon the stalled sysex in any way, right?

Making send return a boolean sounds possible, but it would never mean the data was sent to the device, only that it was successfully buffered.

As for the second question, about allowing only one request at a time: we need to preserve message ordering in cases where an in-flight message fails partway through. Imagine the second request fails asynchronously while the user has already sent a third request that may succeed; that would produce something the user does not expect.

agoode commented 8 years ago

Let me introduce a classic MIDI use case, and folks can weigh in on various ideas for making it work.

An SMF file contains a stream of <timestamp><event> pairs (basically the same as your proposed chunk description above), which we want to stream to an underlying MIDI sink at exactly the correct time. MIDI itself has no concept of timestamps, so something has to schedule this. SMF files can trivially take arbitrary amounts of wall clock time to play, with arbitrary gaps between sounds.

Right now, Web MIDI clients have to schedule themselves to submit some number of events with timestamps, and use setInterval to remind themselves to send some more. We should be able to do better than this.

If we want to get fancy, allow for user-supplied tempo changes which take effect immediately, while the SMF is streaming.
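The self-scheduling pattern described above can be sketched as follows; takeDueEvents and the lookahead window are illustrative names, not API from any spec:

```javascript
// On each timer tick, pick the SMF events whose timestamps fall within a
// lookahead window; the caller hands them to send() with their precise
// timestamps and the implementation plays them at the right time.
function takeDueEvents(events, startIndex, now, lookaheadMs) {
  let i = startIndex;
  while (i < events.length && events[i].timestamp < now + lookaheadMs) i++;
  return events.slice(startIndex, i);
}

// Usage in a browser:
// let idx = 0;
// setInterval(() => {
//   const due = takeDueEvents(track, idx, performance.now(), 1000);
//   for (const e of due) output.send(e.data, e.timestamp);
//   idx += due.length;
// }, 250);
```

This is exactly the bookkeeping the comment argues the platform should be able to take over.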

toyoshim commented 8 years ago

@domenic Hmm... it probably makes sense that Web Bluetooth does not need Streams at this point. Bluetooth defines a write ack at the protocol level, and OS-level APIs seem to expose this model directly, so mapping write/write-ack to a write method that returns a Promise sounds like a straightforward, reasonable solution.

But I believe WebUSB will need Streams more than Web MIDI does.

toyoshim commented 8 years ago

Here is my answer on async vs. sync asStream.

We should probably make MIDIOutput explicitly require that MIDIOutput.open has completed before asStream is called. Since buffer allocation happens on the renderer side, the remaining work for asStream could then finish synchronously.

toyoshim commented 8 years ago

For SMF playback, I'd prefer to use requestAnimationFrame(timestamp => { }), even though the task isn't related to graphics. We can calculate the delta time in the callback and send the next batch of messages, those that won't make it before the next callback cycle, as estimated from the calculated delta.

yutakahirano commented 8 years ago

How about the close operation? I mean, what is the relation between WritableStreamWriter.close and MidiOutputPort.close?

toyoshim commented 8 years ago

IMO, WritableStreamWriter.close should close only the stream instance, not the actual MIDIOutput channel. On the other hand, if the MIDIOutput channel is closed, the stream instance should be closed automatically and any in-flight operations aborted.

cwilso commented 8 years ago

Why does WebBluetooth (and WebUSB) lack backpressure? Particularly considering those are two of the most common transports for MIDI, that seems untrue...

await writer.write({ data: d1, timestamp: ts1 });

What determines that object format? And what does a corresponding input scenario look like?

Right now, Web MIDI clients have to schedule themselves to submit some number of events with timestamps, and use setInterval to remind themselves to send some more. We should be able to do better than this.

No, we really, really should not. If we do, we are creating a sequencing system; you can certainly build one on TOP of what we're building here, but it should not be the low-level API. In the same way that Web Audio defines a system you can build a sequencer on top of but does not have a built-in rescheduling capability, Web MIDI should not either.

As for building that sequencer: you shouldn't use requestAnimationFrame, because rAF is tied to page visibility, NOT what you want here. In fact, the only reason setInterval is broken is the throttling of setTimeout/setInterval to 1Hz when the page is not in focus. I've raised fixing that (or, more to the point, enabling a consistent setInterval somehow), but it's not there yet. At any rate, rAF has particular semantics that are not desirable here.

toyoshim commented 8 years ago

@cwilso Web Bluetooth seems to expose low-level packet transmission as-is. BLE defines two kinds of write operation on a characteristic: one with a response and one without. The maximum packet size is usually around 20 bytes. The allowed operations, maximum packet size, and so on are declared through the GATT service, characteristic by characteristic.

BluetoothRemoteGATTCharacteristic having Promise<void> writeValue(BufferSource value); is a straightforward design that just exposes the underlying BLE API, and other well-known operating systems expose similar asynchronous APIs, so I think it is reasonable. I'm not sure how useful this is for understanding the situation, but here is one native application's code that uses BLE.

FYI, the BLE MIDI spec requires write-without-response for MIDI OUT, so my explanation cannot fully answer your question. MIDI IN uses notify, so backpressure is not needed there, just as in Web MIDI.

USB also depends on an underlying packet-based protocol, but as far as I know operating systems usually do not expose it (though I haven't checked thoroughly). E.g., a Linux USB driver is written using traditional blocking read/write-style APIs, like this. Exposing an underlying blocking API to JavaScript as an asynchronous one is a common challenge in web standards, and I think WebUSB probably should have Streams for interrupt, bulk, and isochronous transfers, because these are usually exposed as bidirectional channels.

I strongly object to having Streams for MIDIInput. It is not useful, it just makes browser implementations more difficult, and, as Adam said, it does not fit MIDI's broadcast-style messages.

But I'm neutral on MIDIOutput at this point; there are pros and cons.

Re: rAF, I assume applications stop playing in the background. Timer throttling could be a topic for User Agent Intervention, which we have already done elsewhere, but it might be something that requires the user's permission.

agoode commented 8 years ago

@cwilso Sorry, I was a little unclear with "we" here. I agree Web MIDI should not provide a sequencer, but Web MIDI (or the Web in general) should provide some mechanism for implementing one. There are various partial solutions like setInterval or rAF, but as you point out, they are not quite right.

What is the Web Audio mechanism? Do you mean AudioProcessingEvent? That is good for periodic data at a high rate, but MIDI data is aperiodic.

toyoshim commented 8 years ago

Why doesn't a throttled 1-second interval timer work for you? You should be able to queue the next second's worth of messages with proper timestamps. I don't think you need to adjust playback speed very quickly in a background tab, right?

agoode commented 8 years ago

1 second is probably ok, though I think you might get an audible glitch during the initial transition from 60Hz to 1Hz, when your buffer would empty early?

notator commented 8 years ago

Why throttled 1 sec interval timer does not work for you?

My sequencer-like application sends timestamped MIDI data at irregular intervals of less than a second. When the tab goes into the background, all the messages timestamped within the same second are played at the same moment, at one-second intervals. That's just ugly. To avoid this problem, the application has to stop itself playing when its tab goes into the background. It would obviously be better if the app could keep playing normally even when its tab is not at the front.

toyoshim commented 8 years ago

@agoode Ah, that's a good point. I haven't investigated whether there is a smart way in JS to detect the transition to the background and switch algorithms so playback stays smooth without any glitch. If there isn't, the WICG is exactly the right place to discuss adding one; there are already issues there discussing timer throttling.

@notator You should be able to use fine-grained timestamps even if you are using a 1-second interval timer. The timestamp was originally introduced for exactly this purpose: accurate playback under coarse-grained execution timing.

notator commented 8 years ago

You should be able to use fine-grained timestamp even if you are using 1sec interval timer.

Can you point me at an example of how to do that? (EDIT: I probably misunderstood you there. Probably you meant that this is precisely what you are working on.)

BTW: like @bome, I'm also very interested in backpressure and overflowing buffers, so am following this thread with great interest!

toyoshim commented 8 years ago

The concept is something like this:

// Send the next second's worth of data (two messages, 500 ms apart)
// from a 1-second interval timer.
setInterval(() => {
  const now = performance.now();
  out.send(data, now + 0);
  out.send(data, now + 500);
}, 1000);

notator commented 8 years ago

Ah, I see... Thanks. But the details of doing that are not going to be easy. It would be much nicer if the problem could be solved in the browser once and for all, rather than by every app developer. Watching this space. Sorry for the interruption.

cwilso commented 8 years ago

I think James is using my polyfill, which uses setTimeout directly (i.e. I always intended it to be a stopgap measure, and did not implement the async scheduled send in a way that is resilient to the tab going into the background). The pattern of fine-grained timestamps is correct, but the polyfill won't send them correctly in time if the tab goes into the background. Chrome's native implementation of Web MIDI will.

It's quite possible to do this, today - setTimeout and setInterval are not throttled to 1Hz in Worker threads, only in the main thread. So you can use a worker thread to send yourself a timer message, and it will be consistent. (In fact, this is implemented in my metronome code at https://github.com/cwilso/metronome.) I should probably fix up the polyfill.

The throttling was originally implemented by browsers to defeat naive use of setTimeout for animation, which was causing horrendous power usage (in short, animation would happen even when offscreen). Although this was done for good reasons, this particular case, where a developer really DOES want a timer that fires at >1Hz even while the tab is blurred, is a pain, since you have to use a separate thread. I'd proposed that we should have a setTimeout/setInterval switch that says "I know what I'm doing, don't throttle me", but the use case is pretty narrow.

Incidentally, you can smoothly detect when the system switches between full-resolution setTimeout and throttled setTimeout - just watch for the window.onblur and window.onfocus events. Of course, when the window comes back into focus, you may have up to a second already scheduled, and you can't get rid of it.

setInterval is the right solution here; it just needs a way to disable the hack browsers put in to fix the naive-animation power drain.
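A minimal sketch of the worker-timer pattern described above, since setInterval is not throttled to 1Hz in Worker threads. This is browser-only code; the worker source is inlined via a Blob URL, and all names here are illustrative:

```javascript
// Worker script: posts a 'tick' message at the requested rate even when
// the tab is in the background, because Worker timers are not throttled.
const tickerSource = `
  let id = null;
  onmessage = (e) => {
    if (e.data.start) id = setInterval(() => postMessage('tick'), e.data.intervalMs);
    else clearInterval(id);
  };
`;

// Main thread: the onTick callback does the actual MIDI scheduling,
// e.g. queuing the next second of timestamped messages via output.send().
function startTicker(intervalMs, onTick) {
  const url = URL.createObjectURL(new Blob([tickerSource], { type: 'application/javascript' }));
  const worker = new Worker(url);
  worker.onmessage = onTick;
  worker.postMessage({ start: true, intervalMs });
  return worker; // post { start: false } or call worker.terminate() to stop
}
```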


notator commented 8 years ago

@cwilso

I think James is using my polyfill, which uses setTimeout directly (i.e. I always intended it to be a stopgap measure, and did not implement the async scheduled send in a way that is resilient to the tab going into the background). The pattern of fine-grained timestamps is correct, but the polyfill won't send them correctly in time if the tab goes into the background. Chrome's native implementation of Web MIDI will.

I'm using your polyfill for Firefox+Jazz, and Chrome's native implementation in Chrome. I just checked that this really is so by putting a breakpoint on the polyfill in both browsers.

Chrome's native implementation of Web MIDI will.

You mean in future, right?

However, my app's "simple playback" code still uses a tick() function [1] that hasn't changed much since you practically wrote it for me in 2012. The function has a single call to setTimeout and does not use setInterval. It currently runs in the main browser thread, but

setTimeout and setInterval are not throttled to 1Hz in Worker threads, only in the main thread.

makes the solution look much closer. :-)

It seems that all I have to do is create a Worker thread that handles the performance timing and message sending.

My equivalent "assisted performance" code [2] marshals MIDI messages that are prepared in Worker threads, but sends them in the main thread when the user hits the proper key. Presumably I should also send those (marshalled) messages in a Worker thread, even though there's no user interaction when the tab is in the background. (The triggered events can be very long...)

Am I right in thinking that I needn't bother implementing these extra Workers, because the Web MIDI API is shortly going to provide alternatives that will keep my code simpler?

[1] in https://github.com/notator/assistant-performer/blob/master/ap/Sequence.js [2] in https://github.com/notator/assistant-performer/blob/master/ap/Keyboard1.js

cwilso commented 8 years ago

@notator

Chrome's native implementation of Web MIDI will. You mean in future, right?

No, it should work now. There's no reason for the internal timestamping to be throttled. I haven't personally tested this, but if it's not working, it should be filed as a bug.

It seems that all I have to do is create a Worker thread that handles the performance timing and message sending.

Actually, if you look at the metronome code I pointed at before, it has a nice module that does precisely that (sends a "tick" reliably to the main thread by using a worker thread setInterval).

notator commented 8 years ago

@cwilso I mentioned the problem a month ago at https://bugs.chromium.org/p/chromium/issues/detail?id=497573#c9 but @toyoshim said

that's expected Chrome behavior. Chrome throttles inactive tabs' timer invocations. It isn't something specific to Web MIDI.

and

The recommended solution is to stop playing when your tab is in the background.

Which is why my app currently does just that. (Are we somehow talking at cross purposes?)

Liked your metronome code a lot! I'll be implementing something along those lines if a simpler solution doesn't appear in the next few months...

This "background tab" issue seems unrelated to the backpressure/overflowing buffers problem, but maybe that's not the case... My app can throw lots of data at the browser in a very short time, so I'm very interested to see how that situation can be controlled. In particular, I need to know when I start sending messages that are not going to get delivered.

ryoyakawai commented 8 years ago

@notator Why do you need background tabs to run at the same priority as the foreground tab? I'm just curious, because I usually design and implement applications that run everything within one tab (sometimes in several windows).

I guess one of the reasons Chromium/Chrome forces background tabs to a lower priority is to minimize consumption of hardware resources (battery, CPU, memory, etc.). Deciding those priorities is the browser's policy, so I don't think it's a good idea for each API to try to control them.

cwilso commented 8 years ago

All modern browsers throttle setInterval/setTimeout to 1Hz because a lot of developers used them to run animation at 60Hz and didn't bother to shut them down when the tab went out of focus. It caused amazing background-tab battery drain.

notator commented 8 years ago

@ryoyakawai This actually has fairly low priority for my app, but I think it would be more web-consistent if browsers didn't throttle Web MIDI apps running in a background tab.

Apps that produce sound but don't use setInterval/setTimeout (Flash players, YouTube videos, the HTML <audio> tag (I haven't tested that), etc.) don't get throttled or stopped when they run in the background. And Web MIDI apps get throttled in a background tab but not in a background window. That will confuse users unless the app programmer goes to the extra trouble of working around the problem with a solution like Chris's metronome. It would be much better if Web MIDI apps behaved web-consistently by default.

Maybe we simply need a Web MIDI replacement for setTimeout that doesn't get throttled? I'm not currently using setInterval, and I don't think I'd ever need to if we had an unthrottled replacement for setTimeout.

By the way: I said

My app can throw lots of data at the browser in a very short time, so I'm very interested to see how that situation can be controlled. In particular, I need to know when I start sending messages that are not going to get delivered.

More precisely: I need to know, before I send a message, whether it's going to be delivered or not.

EDIT: It looks as if we have two different issues here: unthrottled setTimeout and the "mechanism for backpressure" that involves send. Maybe these issues are related after all?

toyoshim commented 8 years ago

Browsers introduced such throttling to protect users from selfish applications. Also, timer throttling does not break compatibility; AFAIK it is allowed by the spec. Adding unthrottled versions of setTimeout/setInterval would be worth considering, but it might be something allowed only with the user's permission. Anyway, I think that's out of scope for this thread and this WG, and should be discussed in the right place, the WICG. (Issue list)

Web MIDI's timestamps should not be throttled, as Chris said, and this works, or at least worked, on all platforms. But there have been big changes to the thread and task schedulers inside Chrome, so a platform-dependent regression is possible. If it does not work as intended on your platform, please file a bug at crbug.com with simple sample code that reproduces the issue.

notator commented 8 years ago

In spite of what toyoshim just said, I'd like to record a thought I had this morning. Maybe there's a mistake in there somewhere, but anyway:

If we had a new, (non-throttling) setTimeout in the Web MIDI API (setMidiTimeout), then it could be designed to solve the backpressure issue as well. It could return the size of the currently available buffer. If I'm worried that my messages are going to overflow the buffer, I could then do something like this:

var remainingBufferSpace = setMidiTimeout(undefined, 0, 0);
var message = nextMessage(); // the next message that tick() is going to send
var nextMessageLength = length(message);
while(thereAreStillMessagesToBeSent)
{
    if(remainingBufferSpace > nextMessageLength)
    {
        // send the message (at least load it into the buffer successfully)
        remainingBufferSpace = setMidiTimeout(tick, delay, nextMessageLength);
        message = nextMessage();
        nextMessageLength = length(message);
    }
    else
    {
        ...
    }
}

And there would be no need to change the way send behaves. EDIT1: fixed a bug in the above code. EDIT2: added a third argument to setMidiTimeout, so that it can subtract the value from its current buffer size and return immediately.