Hi @Sacharified, I'm just catching up on things post annual leave. I'll get back to you on your question tomorrow.
I think the part of the createChannelMerger API I was looking at was that it lets you choose how many inputs you require on construction.
(from the MDN docs)
const ac = new AudioContext();
const source = ac.createBufferSource();
const splitter = ac.createChannelSplitter(2);
const merger = ac.createChannelMerger(2);
const gainNode = ac.createGain();

// Split the stereo source and route the left channel through the gain node
source.connect(splitter);
splitter.connect(gainNode, 0);

// Connect the splitter back to the second input of the merger: we
// effectively swap the channels, here, reversing the stereo image.
gainNode.connect(merger, 0, 1);
splitter.connect(merger, 1, 0);
merger.connect(ac.destination);
(side note, I'm not a fan of all the positional arguments in the Web Audio API)
The thought I had was whether we should have a node that is effectively a transition node with n or Infinity inputs, combined with your idea of swapping out shaders.
// NB made up api with zero thought
const mixer = ctx.mixer()
nodeA.connect(mixer, 0)
nodeB.connect(mixer, 1)
nodeC.connect(mixer, 2)
mixer.transition({ time, from, to, shader })
The other approach is to stick to the current transition node but make it easier to schedule connections on the timeline.
e.g.
// pseudo api again
const transition = ctx.transition({ shader })
nodeA.connectAtTime({ node: transition, input: 0, time: 1 })
nodeB.connectAtTime({ node: transition, input: 1, time: 1 })
nodeC.connectAtTime({ node: transition, input: 0, time: 2 })
transition.transition({ time: 1, from: 0, to: 1 })
transition.transition({ time: 2, from: 1, to: 0 })
This makes connections act the same way as parameters. But it's a step away from what Web Audio does, so it breaks the API ethos somewhat.
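For comparison, this is the pattern in Web Audio that connectAtTime would mirror: parameter changes are scheduled against the context clock rather than applied imperatively (the gain fade here is just an illustration):
const ac = new AudioContext();
const gainNode = ac.createGain();
gainNode.connect(ac.destination);

// A fade-in scheduled on the timeline, Web Audio style
gainNode.gain.setValueAtTime(0.0, ac.currentTime);
gainNode.gain.linearRampToValueAtTime(1.0, ac.currentTime + 2);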
Hey @PTaylour, thanks for that.
I don't think it's necessarily a bad thing to diverge from the Web Audio API in this scenario, as this is probably the case where video and audio are most distinct from one another.
In the ChannelMergerNode approach, it looks like it would be down to the user to handle the ordering of channels, and the timing for changing that ordering, which could make things quite complex.
The transition node which handles infinite inputs seems like the simplest and most declarative approach from an API perspective. Are the positional arguments necessary? The API convention so far seems to be to order inputs by the order in which they are connected.
It would be good to hear your thoughts on how difficult this would be to implement, and whether you anticipate any problems.
I agree that the n-input transition node is more user-friendly.
The connect method already supports accepting a target port as the second argument:
https://github.com/bbc/VideoContext/blob/develop/src/graphnode.js#L83-L92
So API-wise it looks like it would be pretty easy to support positional args, if users prefer to be explicit about which port they are connecting to.
And, as you say:
"The API convention so far seems to be to order inputs by the order in which they are connected."
So we can support that too.
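Roughly, the two styles side by side (the transition and source nodes here are placeholders):
// Explicit: state the target port as the second argument
nodeA.connect(transition, 0);
nodeB.connect(transition, 1);

// Implicit: ports assigned in the order the nodes are connected
nodeA.connect(transition);
nodeB.connect(transition);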
targetPort seems to directly set the zIndex. This behaviour all lives in rendergraph, called from sourceNode, so we've either got to fork that node or fork the logic in there.
We should think about how to classify this new node. How it differs from a standard "zIndex" layering node.
We're tidying up open issues and PRs this sprint. Feel free to re-open when necessary :)
Bumping this for further discussion.
This is a follow-up to the discussion started in #102.
When multiple Transition Nodes are siblings in the node graph, the outcome is not intuitive. Only the top sibling's output will ever be seen, and all other transitions will be invisible, because each Transition Node is constantly outputting.
Here's an example of a common use case which is impossible due to this issue:
I have clips A, B, C and I want to transition from A->B and then B->C. This cannot be achieved using a single Transition Node, as each Transition Node only supports 2 inputs. Therefore we have to create 2 Transition Nodes.
The code would look something like this:
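A minimal sketch of that setup, assuming VideoContext's standard API (timings and file names are made up for illustration):
const canvas = document.getElementById("canvas");
const ctx = new VideoContext(canvas);

const videoA = ctx.video("a.mp4");
const videoB = ctx.video("b.mp4");
const videoC = ctx.video("c.mp4");
videoA.start(0); videoA.stop(5);
videoB.start(4); videoB.stop(9);
videoC.start(8); videoC.stop(12);

// A -> B on the first transition node
const transition1 = ctx.transition(VideoContext.DEFINITIONS.CROSSFADE);
videoA.connect(transition1);
videoB.connect(transition1);
transition1.transition(4, 5, 0.0, 1.0, "mix");

// B -> C on a second, sibling transition node
const transition2 = ctx.transition(VideoContext.DEFINITIONS.CROSSFADE);
videoB.connect(transition2);
videoC.connect(transition2);
transition2.transition(8, 9, 0.0, 1.0, "mix");

// Both siblings are connected directly to the destination
transition1.connect(ctx.destination);
transition2.connect(ctx.destination);

ctx.play();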
The outcome of this is that you see the first transition, transition1, take place as expected: Video A dissolves into Video B. What you then see is Video B continuing to play, where you would expect to see the transition from B->C. This is because transition1 is outputting Video B over the top of transition2's output.
I proposed in the other issue that we could allow one Transition Node to utilise multiple shaders, but this still does not solve the issue above. A solution to the issue, however, would negate the need for Transition Nodes to support multiple shaders.
@PTaylour, you had suggested using a similar implementation to the Web Audio API's ChannelMergerNode. Would you care to expand on that?
One solution might be the ability to dynamically set a node's z-index: when a Transition Node is transitioning, its z-index could be moved above those of its sibling Transition Nodes.
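A hypothetical sketch of what that could look like (a settable zIndex does not exist in the current API, so everything here is an assumption):
// Hypothetical: raise whichever transition is active above its sibling
transition1.zIndex = 1; // A -> B renders on top first
// ...then, when the second transition begins:
transition1.zIndex = 0;
transition2.zIndex = 1; // B -> C now renders on top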