Closed: jo-hnny closed this issue 6 years ago.
You have a graph that is similar to this:
The Destination Node can take many inputs, which it "combines" in the order they were connected. In your example transitionNode2 is connected after transitionNode1, so it will be rendered on top of transitionNode1.
As the output of the transitionNode2 path is opaque, it will completely obscure the output of transitionNode1, which is why you don't see transitionNode1's transition.
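To make the layering concrete, here is a minimal sketch of the kind of graph being described (the video source names are placeholders; transitionNode1 and transitionNode2 are the nodes from the example):
// two transition paths sharing the same sources, both connected to the destination
videoNodeA.connect(transitionNode1);
videoNodeB.connect(transitionNode1);
videoNodeA.connect(transitionNode2);
videoNodeB.connect(transitionNode2);
transitionNode1.connect(vc.destination); // connected first: composited underneath
transitionNode2.connect(vc.destination); // connected second: composited on top
// transitionNode2's output is opaque, so transitionNode1's transition never shows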
One way to ensure a particular path is rendered at the destination is to set all other paths to opacity 0.
You can do this with another transition node. See this graph:
One way to implement this:
Here I've patched VideoContext with a helper node called gate that is used to toggle the opacity of a single input. We just need to ensure the gate is open for the path we want to see, and closed for all other paths.
var canvas = document.getElementById("canvas");
class CustomVideoContext extends VideoContext {
  // Returns an OPACITY transition node used as an on/off "gate" for a single path.
  gate() {
    const transition = this.transition(VideoContext.DEFINITIONS.OPACITY);
    // make the gated path fully visible from `time` onwards
    transition.openAt = function (time) {
      return transition.transitionAt(time, time, 1, 1, 'opacity');
    };
    // make the gated path fully transparent from `time` onwards
    transition.closeAt = function (time) {
      return transition.transitionAt(time, time, 0, 0, 'opacity');
    };
    // remove all scheduled open/close transitions
    transition.reset = function () {
      return transition.clearTransitions();
    };
    return transition;
  }
}
window.vc = new CustomVideoContext(canvas);
// Create nodes for the processing graph.
var videoNode1 = vc.video("../../assets/introductions-rant.mp4", 0);
var videoNode2 = vc.video("../../assets/introductions-rant.mp4", 10);
var monochromeEffect = vc.effect(VideoContext.DEFINITIONS.MONOCHROME);
var transitionNodeStar = vc.transition(VideoContext.DEFINITIONS.STAR_WIPE);
var transitionNodeDissolve = vc.transition(VideoContext.DEFINITIONS.STATIC_DISSOLVE);
var starGate = vc.gate()
var dissolveGate = vc.gate()
// Connect the graph together
videoNode1.connect(monochromeEffect);
// first input of both transitions is the monochrome video
var MONOCHROME_INPUT = 0
monochromeEffect.connect(transitionNodeStar);
monochromeEffect.connect(transitionNodeDissolve);
// second input of both transitions is the colour video
var COLOUR_INPUT = 1
videoNode2.connect(transitionNodeStar);
videoNode2.connect(transitionNodeDissolve);
// each transition path can be turned to opacity zero with a "gate"
transitionNodeDissolve.connect(dissolveGate);
transitionNodeStar.connect(starGate);
// destination is downstream of the gates
dissolveGate.connect(vc.destination);
starGate.connect(vc.destination);
// Set up the start/stop times for the video sources
videoNode1.start(0);
videoNode1.stop(10);
videoNode2.start(0);
videoNode2.stop(10);
// start with the star wipe path
starGate.openAt(0)
transitionNodeStar.transitionAt(1, 2, MONOCHROME_INPUT, COLOUR_INPUT);
// switch to the path with the dissolve transition
starGate.closeAt(4)
dissolveGate.openAt(4)
transitionNodeDissolve.transitionAt(4, 6, COLOUR_INPUT, MONOCHROME_INPUT);
vc.play();
There are other ways to achieve the same effect, e.g. use a setTimeout to connect the transition you want to see to the destination just in time. I'm not a fan of this as it's less accurate and goes against the nice declarative approach. Has anyone else used a different solution?
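For reference, that setTimeout approach might look something like this (a rough sketch: the 4-second delay is illustrative, and it assumes disconnect() detaches a node from its outputs):
// only the path we currently want to see is connected to the destination
transitionNodeStar.connect(vc.destination);
vc.play();
// swap paths "just in time" with a timer, outside the VideoContext timeline
setTimeout(function () {
  transitionNodeStar.disconnect();                // hide the star wipe path
  transitionNodeDissolve.connect(vc.destination); // show the dissolve path instead
}, 4000);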
Great question by the way. I remember running into the same issue when I first used the library.
@PTaylour Thank you. I want to use it for a project that combines video, audio, pictures, text, transitions, and colour palettes. I've only just started, and I feel this library is great, but the documentation is a bit too sparse.
Always looking to improve the documentation. Is there anywhere in particular you think it's lacking? E.g. would it have helped for a section on the destination node to explain how images are layered according to the order they were connected?
@PTaylour Do you think this should be the default behaviour of a transition node? I can't imagine why it would be useful to have this particular type of node rendering an opaque output over other nodes after the transition has finished. Being able to transition to and from a single node is probably a pretty common use case.
@Sacharified I do think it's a little user-hostile as things stand. Using the same transition to mix between two input nodes is easily achievable, but using multiple transitions to mix between the same input nodes is where things start to get complicated.
Do you think this should be the default behaviour of a transition node?
Do you mean that a transition node should output transparency before the transition start time and after the transition end time? Or that transition nodes should be given the highest "z-index" at transition start time?
const busA = vc.video("pathToVideo");
const busB = vc.video("pathToVideo");
busA.startAt(0);
busB.startAt(0);
const transitionNode = vc.transition(VideoContext.DEFINITIONS.STAR_WIPE);
busA.connect(transitionNode);
busB.connect(transitionNode);
// note the order we connected buses to transition node
const inputA = 0;
const inputB = 1;
const t = 3 // start first transition at time t
transitionNode.transition(t, t + 2, inputA, inputB);
transitionNode.transition(t + 4, t + 5, inputB, inputA);
transitionNode.transition(t + 10, t + 12, inputA, inputB);
transitionNode.connect(vc.destination)
vc.play();
^ This works fine as per the current implementation.
// treat transition nodes as one-shots so that we can use different definitions
const transitions = [
  vc.transition(VideoContext.DEFINITIONS.STAR_WIPE),
  vc.transition(VideoContext.DEFINITIONS.A_DIFFERENT_TRANSITION),
  vc.transition(VideoContext.DEFINITIONS.ANOTHER_DIFFERENT_TRANSITION)
];
transitions.forEach(transition => {
  busA.connect(transition);
  busB.connect(transition);
  // these will overlay each other unless gated in the right order
  transition.connect(vc.destination);
});
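One way to tame that overlaying, reusing the gate() helper from earlier in the thread, could be to route each one-shot transition through its own gate rather than connecting it straight to the destination. A sketch, with arbitrary time slots:
// one visible time slot per one-shot transition (example values)
const slots = [
  { start: 0, end: 4 },
  { start: 4, end: 8 },
  { start: 8, end: 12 }
];
transitions.forEach((transition, i) => {
  const gate = vc.gate();         // opacity gate from the CustomVideoContext above
  busA.connect(transition);
  busB.connect(transition);
  transition.connect(gate);       // the gate sits between the transition and the destination
  gate.connect(vc.destination);

  const { start, end } = slots[i];
  if (start > 0) gate.closeAt(0); // keep this path hidden until its slot begins
  gate.openAt(start);             // visible during its slot...
  gate.closeAt(end);              // ...and hidden again afterwards
});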
// more than 2: I want to transition from one bus to any of the others
const bus = [vc.video("foo"), vc.video("foo"), vc.video("foo"), vc.video("foo")];
// to do this I need to do cascading transitions:
/**
 *
 * bus[0] ---
 *           |-- transitionA --->|
 * bus[1] ---                    |
 *                               |---- transitionC ---> destination
 * bus[2] ---                    |
 *           |-- transitionB --->|
 * bus[3] ---
 */
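In code, that cascading graph might look something like this (a sketch only: the definitions, times, and mix values are placeholders, and input indices follow connection order as in the earlier examples):
const [bus0, bus1, bus2, bus3] = bus;
const transitionA = vc.transition(VideoContext.DEFINITIONS.STAR_WIPE);
const transitionB = vc.transition(VideoContext.DEFINITIONS.STAR_WIPE);
const transitionC = vc.transition(VideoContext.DEFINITIONS.STAR_WIPE);
// left column of the diagram: pairs of buses feed the first-level transitions
bus0.connect(transitionA);
bus1.connect(transitionA);
bus2.connect(transitionB);
bus3.connect(transitionB);
// right column: the two intermediate outputs feed a final transition
transitionA.connect(transitionC);
transitionB.connect(transitionC);
transitionC.connect(vc.destination);
// e.g. to end up on bus[3]: first move within the B branch (not yet visible),
// then cross from the A branch to the B branch at the top level
transitionB.transition(1, 2, 0, 1); // inside B: bus[2] -> bus[3]
transitionC.transition(3, 4, 0, 1); // top level: A branch -> B branch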
I could imagine something like Web Audio's ChannelMergerNode solving the second use case. But I wonder if it's more a limitation of having to set up connections independently of the timeline. This would all be less of an issue if you could call inputNode.connectAtTime(t + 2, transitionA).
@PTaylour As your first example shows, this already works well if you use the node's transition() method to schedule all transitions, but the limitation is that you may want to use different shaders for each transition.
Perhaps it would be possible to allow the user to specify the shader to use for each call to TransitionNode.transition()? This would avoid complicating the node tree with timing and "z-indexing" workarounds, but does contravene the current convention of each node having a fixed shader.
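Purely as an illustration of that suggestion, the call could read something like this (the extra definition argument is hypothetical and not part of the current API):
// hypothetical: name the definition to use for each scheduled transition
transitionNode.transition(t, t + 2, inputA, inputB, "mix", VideoContext.DEFINITIONS.STAR_WIPE);
transitionNode.transition(t + 4, t + 5, inputB, inputA, "mix", VideoContext.DEFINITIONS.STATIC_DISSOLVE);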
Currently, ProcessingNode's render() function is where the WebGL program to be used for that update is set, and that is called on every update. It should be possible to modify GraphNode such that one node can hold a list of shader programs and dynamically choose which one to use at a given time.
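A rough sketch of that idea (the class, method, and field names here are illustrative, not the library's actual internals):
// a processing node that holds several shader programs and picks one per update,
// instead of being bound to a single program at construction time
class MultiProgramNode {
  constructor(programs) {
    this._programs = programs; // e.g. [{ name: "starWipe", program: p0 }, ...]
    this._schedule = [];       // entries of the form { start, end, programIndex }
  }
  useProgramBetween(start, end, programIndex) {
    this._schedule.push({ start, end, programIndex });
  }
  _programIndexForTime(time) {
    const active = this._schedule.find(s => time >= s.start && time <= s.end);
    return active ? active.programIndex : 0;
  }
  render(time, gl) {
    const { program } = this._programs[this._programIndexForTime(time)];
    gl.useProgram(program);
    // ...then bind uniforms/textures and draw, as the existing render() does
  }
}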
I think we should look to alter the way transition nodes can/are used. @Sacharified do you want to start a fresh issue on the subject and we can work on a design?
Closing this for now. We can start a fresh issue to talk design.