Closed: jdsmith3000 closed this issue 9 years ago.
There are some interactions with ABSN.buffer that cause problems (what happens if you've scheduled a start, if it's playing, if you set buffer to something else while it's playing, etc.) that we need to define better.
ConvolverNode.buffer and WaveShaperNode.curve should be fine to be null; Chrome should fix those.
ABSN definition issue is https://github.com/WebAudio/web-audio-api/issues/288.
Won't setting ConvolverNode.buffer and WaveShaperNode.curve to null also suffer from the same issue as for ABSN? That is, you don't know exactly when the buffer is actually set.
In Chrome, WaveShaperNode.curve = null seems to work. ConvolverNode.buffer = null won't change the buffer.
Resolution: these values will be nullable in an IDL sense, but can only assume the value of null in their initialized state. Setting these values to null after they assume a non-null value (by having a buffer or curve assigned) will throw an InvalidStateError.
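A minimal sketch of what that resolution would mean from script, assuming the AudioContext constructor and an arbitrary one-second buffer (illustrative only, not what current implementations do):

```js
const ctx = new AudioContext();
const source = ctx.createBufferSource();

console.log(source.buffer); // null -- the initialized state, allowed by the IDL

// First non-null assignment is fine.
source.buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);

// Under this resolution, going back to null would throw.
try {
  source.buffer = null;
} catch (e) {
  console.log(e.name); // "InvalidStateError"
}
```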
Yeah, so as I mentioned during the f2f, this would mean this API is the only one on the Web Platform that does not allow setting a nullable to null. I think it would be better for consistency to keep doing something in line with what all other APIs are doing: allow setting null all the time, allow setting to a buffer only once.
@padenot What behavior would you propose when an ABSN/Convolver/WaveShaper that already has a buffer then has its buffer property set to null? That seems almost as problematic as allowing the buffer to be changed to a different one.
We had said we would allow setting the nullable to null -- if it already was null. That's unusual, but I don't see this as somehow changing the definition of nullable, which is not just about setter behavior: it's also about getter behavior, and the fact that the attribute can assume the value null.
Quoting https://heycam.github.io/webidl/#idl-nullable-type
A nullable type is an IDL type constructed from an existing type (called the inner type), which just allows the additional value null to be a member of its set of values.
nullable means null is a valid value: it can be both set and read. I find it problematic for the same pattern to behave differently in this API than in all the others. Granted, it would have been better to have a setter function here, but it's too late for that.
Let's say you had a signed int attribute of some interface that assumed the special value -1 prior to being set (to indicate a lack of initialization), but which only allowed positive values to be set and otherwise threw an exception. Would this be a problem?
So I guess I still don't find this problematic but at this point I'd like to invite others to comment...
Let's say you had a signed int attribute of some interface that assumed the special value -1 prior to being set (to indicate a lack of initialization), but which only allowed positive values to be set and otherwise threw an exception. Would this be a problem?
This is not a good analogy, since nullable expresses something different than "this value may not be initialized by default." nullable means that the special null object (that is not of the WebIDL type of the attribute) can be assigned to it. You can think of it as a special way to relax the type checks performed before invoking a setter. It has nothing to do with how/if the attribute in question is initialized or not.
I'm reading http://www.w3.org/TR/WebIDL/#idl-nullable-type and I don't see that its definition has anything to do with assignment per se. It defines nullable in terms of the set of allowed values that the type can assume.
The exception that would be thrown here is not because null is a disallowed value for the type. It is because null violates a state-based constraint on the attribute, a constraint that has nothing to do with type checking.
I'm reading http://www.w3.org/TR/WebIDL/#idl-nullable-type and I don't see that its definition has anything to do with assignment per se. It defines nullable in terms of the set of allowed values that the type can assume.
Yes. I talked about assignment since that is the question here. Same applies for reading the attribute.
The exception that would be thrown here is not because null is a disallowed value for the type. It is because null violates a state-based constraint on the attribute, a constraint that has nothing to do with type checking.
I understand, and I am arguing that is a bad choice. AudioBuffers are objects that can keep a potentially large buffer alive. It is a feature of the API that it lets you break references to an AudioBuffer object so that the GC can free up the resources referenced. Everywhere else in the platform (at least anywhere else I can think of), if you have a nullable attribute, you can set it to null. The proposal under consideration here is to disallow that only if the previous value of the attribute is not null. That results in an API that is inconsistent with most if not all other APIs in the platform, which is not great.
Also, please note that supporting a consistent API here is essentially a matter of specifying that setting the buffer attribute to null when a previous buffer has started playback is the same as calling stop(0) on it. Is there a downside to that?
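A minimal sketch of that proposed equivalence, assuming an arbitrary one-second buffer (the equivalence itself is @ehsan's proposal here, not specified behavior):

```js
const ctx = new AudioContext();
const source = ctx.createBufferSource();
source.buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
source.connect(ctx.destination);
source.start(0);

// Under the proposal, once playback has started these two lines would have
// the same audible effect:
source.buffer = null; // drops the reference so the AudioBuffer can be GCed
// source.stop(0);    // ...and is defined to behave like an immediate stop
```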
I'll suggest again that we could simply say that assignments to .buffer (and the like) take effect at the next scheduled audio frame. We've already handled the direct memory access issue, so this should be straightforward.
@cwilso What would the effect be on all the other settings that have interplay with buffer contents, e.g. loop points for ABSN, normalize for WaveShaper, etc.? I'm not saying this is impossible, but this is the reason people were trying to dodge the problem of changing or nulling buffers. If we can define all the follow-on effects of a buffer change, would that solve this problem from the rest of the group's point of view?
I don't think this is ready for editing. I'm not even sure what the final conclusion here is.
OK, we'll discuss (and have marked for verbal discussion now, rather than edit-readiness).
My current proposal (captured, but not clearly, in my comment to @cwilso) is that we find all the places where this would have an effect and specify that they take effect at the next audio frame. This is also in line with @ehsan's proposal that nulling an ABSN's buffer is the same as calling stop(0).
@cwilso proposes: all 3 nodes (Convolver, WaveShaper, and ABSN) output silence once buffer or curve is set to null. Once this is done, the node might become inactive and be GCed.
To be clear, I pointed out that just saying "resets to initial state" sounds like it's reusable. My preference would still be that we just make changing the .buffer to something else predictable.
@cwilso Yes -- I agree that setting to null should not result in an ABSN that can then be reused (in this respect it is similar to @ehsan's suggestion that nulling .buffer be equivalent to .stop(0)). For Convolver/WaveShaper I'm not sure that any reusability problem exists, because these nodes aren't one-shot, single-life-cycle nodes in the same way as an ABSN.
We did not reach any consensus on what would happen if a live ABSN's .buffer attribute was set to a new buffer. My opinion is that this should either nuke the ABSN, similar to nulling it, or throw an exception.
Here again, for Convolver/WaveShaper, changing the buffer or curve in a live fashion seems easy to define if we stipulate that changes don't take place until the next audio thread render quantum.
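As a sketch of the kind of live change being discussed, here is a WaveShaper curve swap; the helper and the tanh shape are made up for illustration, and the "takes effect at the next render quantum" timing is the proposal above, not guaranteed behavior:

```js
const ctx = new AudioContext();
const shaper = ctx.createWaveShaper();

// Hypothetical helper: a simple soft-clipping curve of a given strength.
function makeCurve(amount) {
  const n = 1024;
  const curve = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const x = (i / (n - 1)) * 2 - 1; // map index to [-1, 1]
    curve[i] = Math.tanh(amount * x);
  }
  return curve;
}

shaper.curve = makeCurve(2);
// Later, while audio is flowing, swap in a harder curve; under the proposal
// the new curve would start being applied at the next render quantum.
shaper.curve = makeCurve(10);
```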
Say you have a convolver that basically adds a 1 minute delay. You change the buffer now to a very short buffer. What happens to the samples that were supposed to be delayed by the original buffer? Discarded? Magically merged in? Undefined? Something else?
I don't see any real problem with the waveshaper, even with oversampling, since there's no memory within the curve itself. (Any memory is caused by the oversampling.)
Great question. From an app dev perspective, it seems best to retain any accumulated convolution results from the previous waveform, and sum these with the results from this point forward: what is being changed is the transformation applied to the present input, not the previous transformation derived from past inputs.
I am not considering the difficulty of managing a variable-length convolution tail buffer, but this behavior makes intuitive sense to me and I can't think of a case where someone would want something different. But if someone wanted the results completely discarded for some reason, it would be easy enough to swap in a brand new convolver and toss the current one away.
Note that a node-based solution to the typical retained-results case would be much messier, since developers would have to keep an arbitrary number of old convolvers alive in the graph and mix their outputs together.
On Fri, 24 Jul 2015 06:10:31 -0700, Joe Berkovitz wrote:
Great question. From an app dev perspective, it seems best to retain any accumulated convolution results from the previous waveform, and sum these with the results from this point forward: what is being changed is the transformation applied to the present input, not the previous transformation derived from past inputs.
Although this seems a reasonable expectation from the mathematics (even though it is different from DelayNode), I don't think it is useful enough to require support for changing buffers.
Clients would usually need to cross fade in order to avoid discontinuities, and so two convolvers would be required anyway.
Note that a node-based solution to the typical retained-results case would be much messier, since developers would have to keep an arbitrary number of old convolvers alive in the graph and mix their outputs together.
Developers would need to keep a reference to the input in order to connect new convolvers and cross-fade outputs, but they wouldn't need to explicitly keep the old convolvers alive. It is the responsibility of implementations to keep nodes alive (or at least maintain their effects) until they are no longer observable.
The Web Audio API can provide the tools necessary for clients to build useful libraries, which may well change over time. The API already provides the tools (GainNode) required for cross-fading and so doesn't need a larger surface area through building cross-fading into ConvolverNode.
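A rough sketch of that cross-fade pattern; it assumes an existing path input -> oldConv -> oldGain -> output, a replacement impulse response newIR (an AudioBuffer), and an arbitrary 0.1 s fade time:

```js
function swapImpulseResponse(ctx, input, output, oldGain, newIR) {
  // Build the new convolver path, starting silent.
  const newConv = ctx.createConvolver();
  newConv.buffer = newIR;
  const newGain = ctx.createGain();
  newGain.gain.value = 0;

  input.connect(newConv);
  newConv.connect(newGain);
  newGain.connect(output);

  // Ramp the old path down and the new path up over the fade interval.
  const now = ctx.currentTime;
  const fade = 0.1;
  oldGain.gain.setValueAtTime(oldGain.gain.value, now);
  oldGain.gain.linearRampToValueAtTime(0, now + fade);
  newGain.gain.setValueAtTime(0, now);
  newGain.gain.linearRampToValueAtTime(1, now + fade);

  return { newConv, newGain };
}
```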
On Sun, Jul 26, 2015 at 8:59 PM, Karl Tomlinson notifications@github.com wrote:
On Fri, 24 Jul 2015 06:10:31 -0700, Joe Berkovitz wrote:
Great question. From an app dev perspective, it seems best to retain any accumulated convolution results from the previous waveform, and sum these with the results from this point forward: what is being changed is the transformation applied to the present input, not the previous transformation derived from past inputs.
Although this seems a reasonable expectation from the mathematics (even though it is different from DelayNode), I don't think it is useful enough to require support for changing buffers.
Yes. I think this would be particularly difficult in Chrome, which breaks a long convolution into many smaller ones (using linearity) and then uses multiple threads to compute them. This effectively keeps the old convolution around while doing the new one. (I don't actually know what happens in Chrome today when buffers are changed. Hongchan tells me it sounds OK.)
If that's the case, might as well do it with separate nodes. Now that we have selective disconnect, it's much easier to disconnect a node from the graph.
Ray
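As a follow-up to the cross-fade sketch above, the selective disconnect overload mentioned here could be used to tear down the old branch once the fade is done; the parameter names just reuse those of the earlier sketch and are illustrative:

```js
function releaseOldPath(input, oldConv, oldGain, output) {
  input.disconnect(oldConv);  // removes only this one connection
  oldGain.disconnect(output);
  // With no remaining connections and no script references, the old convolver
  // becomes collectable once its tail output is no longer observable.
}
```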
Yeah, it sounds like the "high road" I was suggesting was 1) not that high, and 2) not very practical.
In which case, resetting the buffer should do what? Perhaps the convolver just tosses its entire state and starts over again as if it had just been created.
Joe Berkovitz writes:
In which case, resetting the buffer should do what? Perhaps the convolver just tosses its entire state and starts over again as if it had just been created.
I suspect that having the state reset as you describe would be fine and I suspect that this is what implementations do.
Making ConvolverNode throw when an attempt is made to set a second non-null buffer, making it a kind of one-shot node, is probably also sane IMO, but there may be some backward compat issues.
I don't have a preference.
To discuss further at TPAC
TPAC: All 3 nodes (Convolver, WaveShaper, and ABSN) output silence once buffer or curve is set to null. Once this is done, the node will become inactive and can be GCed if no other references exist. It's not OK to set another buffer or curve on the same node: this will throw an error.
We are tabling the node-reuse question to be raised in a separate issue.
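A minimal sketch of the resolved behavior (illustrative; the exact exception type isn't named above, so InvalidStateError is an assumption here):

```js
const ctx = new AudioContext();
const conv = ctx.createConvolver();

conv.buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate); // OK once
conv.buffer = null; // node now outputs silence and may become collectable

try {
  conv.buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
} catch (e) {
  console.log(e.name); // setting another buffer throws per the resolution
}
```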
On Sun, 25 Oct 2015 23:16:12 -0700, Joe Berkovitz wrote:
TPAC: All 3 nodes (Convolver, WaveShaper, and ABSN) output silence once buffer or curve is set to null. Once this is done, the node will become inactive and can be GCed if no other references exist. It's not OK to set another buffer or curve on the same node: this will throw an error.
We are tabling the node-reuse question to be raised in a separate issue.
If it is not OK to set another buffer or curve on the same node, then doesn't that completely rule out re-use?
What node-reuse question remains?
There is no specific reuse scenario at this point that is still allowed AFAIK. I was just referring to the general topic of "should nodes be reusable?".
Web Audio has several nullable attributes. However, some implementations throw exceptions when you set them to null. The W3C spec should specify what happens when setting null values or identify the exceptions. The nullable attributes include:
• AudioBufferSourceNode::buffer
• ConvolverNode::buffer
• WaveShaperNode::curve
Proposed Solution: Setting null should put these nodes back into their initial state. For example, the AudioBufferSourceNode outputs silence when it has a null buffer.
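A small sketch of the attributes listed above in their initial (null) state, and of the proposed behavior that assigning null restores that state (illustrative only):

```js
const ctx = new AudioContext();
const source = ctx.createBufferSource();
const conv = ctx.createConvolver();
const shaper = ctx.createWaveShaper();

console.log(source.buffer, conv.buffer, shaper.curve); // null null null

// Proposed: after a value has been assigned, assigning null again puts the
// node back into this initial state -- e.g. the source outputs silence, and
// the shaper applies no shaping.
shaper.curve = new Float32Array([-1, 0, 1]);
shaper.curve = null;
```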