@issuefiler opened 4 years ago
~~Additionally: `new Date()` is an allocation (unless optimized out) which should not be done in a Worklet.~~

Misread the code, sorry.
This makes sense. We need to look at the `performance` object to see if we want everything in it (probably not).
`Performance` objects can be serialized, and a dedicated `MessagePort` can be created specifically to post the result of `performance.now()` to `AudioWorkletGlobalScope`.
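One way to act on that, sketched below (the helper name `estimateNow` is mine, not from the thread): the main thread posts `performance.now()` through a `MessagePort`, and the worklet extrapolates between messages using the context clock.

```javascript
// Hypothetical helper, not from the thread: extrapolate the main-thread
// performance.now() inside the worklet from the last value received over a
// MessagePort, using the context-clock delta since it arrived.
// Note: currentTime is in seconds; performance.now() is in milliseconds.
function estimateNow(lastNow, lastCtxTime, ctxTimeNow) {
  return lastNow + (ctxTimeNow - lastCtxTime) * 1000;
}

// In the processor, the main thread would postMessage({ now: performance.now() })
// via node.port; onmessage records { lastNow, lastCtxTime: currentTime }, and
// process() calls estimateNow(this.lastNow, this.lastCtxTime, currentTime).
```

The posted value is already stale by the transit time of the message, which is part of why this approach is debated below.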
```js
function serializePerformance(o = {}) {
  for (const [key, prop] of Object.entries(performance.toJSON())) {
    if (typeof prop === 'object' && 'toJSON' in prop) {
      o[prop.constructor.name] = {};
      for (const [label, entry] of Object.entries(prop.toJSON())) {
        o[prop.constructor.name][label] = entry;
      }
    } else {
      o[key] = prop;
    }
  }
  return o;
}
```
```js
await context.audioWorklet.addModule('audioWorklet.js');
const bypasser = new AudioWorkletNode(context, 'processor', {
  processorOptions: serializePerformance()
});
```
...
```js
constructor(options) {
  super(options);
  console.log(options.processorOptions.timeOrigin);
}
```
...
@guest271314

> …dedicated `MessagePort` can be created specifically to post result of `performance.now()` to `AudioWorkletGlobalScope`
We could do that, but messaging with each other takes extra time. It is the `process` function in an `AudioWorkletGlobalScope` that knows the most accurate timing to stamp the time on samples.
> We could do that, but messaging with each other takes extra time.
Yes, agree; a new message event object needs to be created for each message. A dedicated `AudioWorkletProcessor` can be used just for timing alone.
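A minimal sketch of such a timing-only processor (identifiers are mine, assumed for illustration): each `process()` call covers one 128-frame render quantum, so elapsed time follows from the call count alone.

```javascript
// Assumed render quantum size per the Web Audio API: 128 frames per process() call.
const RENDER_QUANTUM = 128;

// Pure helper: seconds elapsed after `calls` invocations of process().
function callsToSeconds(calls, sampleRate) {
  return (calls * RENDER_QUANTUM) / sampleRate;
}

// In an AudioWorkletGlobalScope this could back a timer-only processor:
// class TimerProcessor extends AudioWorkletProcessor {
//   constructor() { super(); this.calls = 0; }
//   process() {
//     this.port.postMessage(callsToSeconds(++this.calls, sampleRate));
//     return true; // keep the processor alive
//   }
// }
// registerProcessor('timer', TimerProcessor);
```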
Suggest including `PerformanceObserver` within the scope of this feature request, for the ability to watch for data flowing inside and outside of `AudioWorkletGlobalScope` proper, and for synchronization; a very useful API to check if a particular event occurred and how long it took or is taking to complete.
Right. For my personal use case, I need to know (more) exactly what time the processed samples correspond to, and it turned out that just having `performance.now()` available would not be enough for that, as the samples are buffered before being processed. We could look for something that’ll provide detailed and reliable information about the timings and more.
> We could look for something that’ll provide detailed and reliable information about the timings and more.
How is "reliable information" defined? Testing edge cases can reveal unintended consequences and render what might be considered reliable under one condition unreliable under other conditions.
In general, I found that on *nix `process()` is executed at least 344–346 times per second with `latencyHint` set to `1` (resulting in an 8192-frame callback buffer size), incrementing a counter by 1 per call:
```js
constructor() {
  super();
  this.processPerSecond = 0;
  this.t = 0;
}
process() {
  ++this.processPerSecond; // count calls within the current second
  if (currentTime >= this.t + 1) {
    console.log(this.processPerSecond, currentTime, currentTime / 60);
    this.processPerSecond = 0;
    ++this.t;
  }
  return true;
}
```
`currentTime` in `AudioWorkletProcessor` appears to be the same as the `AudioContext` timer.
> I need to know (more) exactly what time the processed samples correspond to
Given the above, the average of 344–346 calls per second can be exactly correlated to any given input sample. We can also get the `process()` call index (1–346) within 1 second, `currentTime`, and `currentFrame`, and subdivide 344–346 arbitrarily to signal other code to run, e.g., every 11.5 calls. From the variables we have and the average rate of progress we can calculate a future `currentTime` to execute at that point on the timeline, and draw a video frame from `AudioWorkletProcessor` at 30 FPS, barring cache being disabled or other interrupts.
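The 11.5-call subdivision reduces to simple arithmetic (helper names are mine): at ~345 `process()` calls per second, a 30 FPS frame falls every 345 / 30 = 11.5 calls, i.e. every 11.5 / 345 s on the context timeline.

```javascript
// Calls between successive video frames at a given frame rate.
function callsPerFrame(callsPerSecond, fps) {
  return callsPerSecond / fps;
}

// Future currentTime (in seconds) at which the next frame should be drawn.
function nextFrameTime(currentTime, callsPerSecond, fps) {
  return currentTime + callsPerFrame(callsPerSecond, fps) / callsPerSecond;
}
```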
Time is relative to one or more contexts of human activity. Is the use case to catch up or slow down? In essence, the timer could be made using a dedicated `AudioWorkletProcessor` itself. What is the precise requirement?
I am not certain any timing value can be relied on as being accurate at the moment it is read within `process()`, because `process()` keeps running: by the time the value is read, the actual time has already progressed.
> Right. For my personal use case, I need to know (more) exactly what time the processed samples correspond to
FWIW, one approach is to store all input data that will be set in `process()` in a `Map`. For example, in the constructor:

```js
constructor(options) {
  super(options);
  this.i = 0; // write index
  this.n = 0; // read index
  // the Map is passed within processorOptions from the main thread
}
```

When samples are received in a method of `AudioWorkletProcessor`:

```js
this.map.set(this.i, { channel0, channel1 /* , ...channelN */ });
++this.i;
```

Then in `process()` get the specific samples that will be set on the next lines:

```js
const { channel0, channel1 } = this.map.get(this.n);
outputChannel0.set(channel0);
outputChannel1.set(channel1);
```

where we can get the value of `currentTime`. Now we have the sample index `this.i` in the `Map` and the `currentTime`, and, if required, the `currentFrame`: https://github.com/guest271314/AudioWorkletStream/blob/master/audioWorklet.js#L142. Here, since the sample is no longer needed, delete the index from the `Map`:

```js
this.buffers.delete(this.n);
++this.n;
return true;
```
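The bookkeeping above, condensed into a Node-runnable sketch (channel data faked with plain arrays; index names follow the original comments):

```javascript
const buffers = new Map();
let i = 0; // write index: next slot to fill
let n = 0; // read index: next slot process() will consume

// Called when samples arrive from the main thread.
function enqueue(channel0, channel1) {
  buffers.set(i++, { channel0, channel1 });
}

// Called once per process() invocation; deletes the entry once consumed,
// so the Map never grows beyond the amount of buffered audio.
function dequeue() {
  const frame = buffers.get(n);
  buffers.delete(n++);
  return frame;
}
```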
Virtual F2F (TPAC 2020):
`performance.now()` doesn't allow measuring native nodes. Precise timers are still problematic because they still allow side channels. It might be possible to hide this behind COOP/COEP.
This is what https://github.com/w3c/hr-time/issues/89 is about; it seems possible. `hr-time` should offer a hook that says "if this document is isolated, timers are precise".
If repeatably feasible, use of high-resolution time values will likely be valuable for X3D Graphics use of Web Audio API for presentation of acoustics. (This is speculative, potential future use case.)
Teleconf: It seems the primary use case (not including https://github.com/WebAudio/web-audio-api-v2/issues/77#issuecomment-796910897), is for performance evaluation. Perhaps this is covered in WebAudio/web-audio-api-v2#40, in which case there may not be a need for this in a worklet.
Leave this open, but reduce priority, pending more use cases that aren't related to performance measurements.
AudioWG virtual F2F:
@juj also asked this in #2527. I think we should get the spec change first since the actual spec work might be very small.
Hi everyone, how is the progress going? We need to know the current render capacity, and a high-precision clock is required.
2023 TPAC Audio WG Discussion: Because this involves the high resolution timer on a high priority thread, each implementer will first have an internal security discussion. The discussion will continue once the implementers are confident in changing the spec and beginning implementation.
I thought the process isolation mechanism addressed high precision timer security issues?
In any case, it is possible today to create a high-precision timer by creating an AudioWorklet with a `SharedArrayBuffer`, in an `AudioContext` with a sample rate of 192000, and, in the `process` function, just `++` an integer in the SAB. That gives you a somewhat precise timer with 0.666 ms granularity, which can be used to polyfill `performance.now()` in the absence of the real thing. Exposing `performance.now()` today would be weaker than that.
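The 0.666 ms figure follows from the render quantum size; a sketch of the reader-side arithmetic (helper names are mine, not from the comment):

```javascript
const RENDER_QUANTUM = 128; // frames per process() call

// Milliseconds represented by one SAB increment:
// at 192000 Hz, 128 / 192000 s = 2/3 ms per tick.
function tickMs(sampleRate) {
  return (RENDER_QUANTUM / sampleRate) * 1000;
}

// Polyfilled "now" in milliseconds; tickCount would come from
// Atomics.load(new Int32Array(sab), 0) on the reading thread.
function sabNow(tickCount, sampleRate) {
  return tickCount * tickMs(sampleRate);
}
```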
The need to get `performance.now()` into AudioWorklets, at least for the Emscripten community, is one of ergonomics: so that we don't need to create polyfills in different scopes, and wasm-compiled code can run in all threads the same way.
There are multiple ways this can be implemented in a user-defined manner. I think the simplest way is a webmanifest, as Progressive Web Apps and Isolated Web Apps use; e.g., Isolated Web Apps have something like

```json
"permissions_policy": {
  "cross-origin-isolated": ["self"],
  "direct-sockets": ["self"]
}
```

where for `AudioWorkletGlobalScope` we could do something like

```json
"exposed": {
  "performance": true,
  "atob": true,
  "fetch": true
}
```
Done. That's it. The user defines whether a Web API or global function is exposed, or not.
> Because this involves the high resolution timer on a high priority thread, each implementer will first have an internal security discussion. The discussion will continue once the implementers are confident in changing the spec and beginning implementation.
With my Firefox implementer hat on, and after talking internally:
In Firefox, if the page is able to instantiate `SharedArrayBuffer` (e.g. because the headers have been set when loading the document), then the precision of `performance.now()` is 20 µs, which might well be good enough for our purpose here.
@padenot What would be the next step to make this possible? It looks like a great addition that is reasonably popular.
The desired feature

High resolution time available within `AudioWorkletGlobalScope`.

I’d love to see `(self.performance.now() + self.performance.timeOrigin)` instead of `Date.now()` in my processor code, as my `process` function is called every two-thirds of a millisecond at the sample rate of 192000 Hz.

Note: currently there’s no way for my processor to use it.
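For context, the requested expression is a high-resolution stand-in for `Date.now()`: both yield milliseconds since the Unix epoch, but `performance.now()` is monotonic and sub-millisecond. A quick check, runnable in any scope where `performance` is exposed (which is exactly what this issue asks for in `AudioWorkletGlobalScope`); the two clocks agree modulo system-clock adjustments made after the time origin was captured:

```javascript
const hrEpochMs = performance.now() + performance.timeOrigin;
const wallEpochMs = Date.now();
// On a freshly started agent the two should agree to within a few ms.
const skewMs = Math.abs(hrEpochMs - wallEpochMs);
```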