QC2168 opened 3 months ago
I believe separating the software into a frontend and a backend is the best bet: implement Revideo as an API endpoint and access it from the frontend.
My solution is to import @revideo/player in my Vue frontend and use variables to pass a dataset, then create a project that calls useScene().variables.get('data', '') to receive the dataset in the project code and uses React-style JSX to arrange the media resources.
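Since variables cross the player boundary as a string, the dataset has to survive a JSON round trip. A minimal sketch of the shape the scene below expects (the field names come from the scene code; the TypeScript types and helper names are my own, not Revideo API):

```typescript
// Shape of one dialogue entry, as destructured in the scene code.
interface Dialogue {
  audio_path: string;
  caption_text: string;
  image_paths: string[];
  audio_duration: number; // milliseconds
}

// Top-level dataset: a debug flag plus the list of dialogues.
interface SceneData {
  debug: boolean;
  dialogues: Dialogue[];
}

// Serialize on the frontend before handing the dataset to the player...
function encodeSceneData(data: SceneData): string {
  return JSON.stringify(data);
}

// ...and parse it back inside the scene (mirrors the JSON.parse call below).
function decodeSceneData(raw: string): SceneData {
  return JSON.parse(raw) as SceneData;
}

const sample: SceneData = {
  debug: false,
  dialogues: [
    {
      audio_path: '/audio/line1.mp3',
      caption_text: 'Hello world',
      image_paths: ['/img/a.jpg', '/img/b.jpg'],
      audio_duration: 4000,
    },
  ],
};

const roundTripped = decodeSceneData(encodeSceneData(sample));
console.log(roundTripped.dialogues[0].caption_text); // "Hello world"
```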
```tsx
import {Audio, Img, makeScene2D, Txt} from '@revideo/2d';
import {all, chain, createRef, waitFor, useScene, Reference} from '@revideo/core';

export default makeScene2D(function* (view) {
  // The dataset arrives as a JSON string through the player's variables.
  const input = useScene().variables.get('data', '');
  const datastr = input();
  const data = JSON.parse(datastr);
  const {debug, dialogues} = data;
  if (debug) {
    console.log('data:', dialogues);
  }

  for (const dialogue of dialogues) {
    const {audio_path, caption_text, image_paths, audio_duration} = dialogue;

    // Add the narration audio and derive the segment duration from it,
    // falling back to the duration supplied in the dataset (ms -> s).
    const audioRef = createRef<Audio>();
    yield view.add(<Audio src={audio_path} play={true} ref={audioRef} />);
    const audio = audioRef();
    const currentTime = audio.getCurrentTime() || 0;
    const total_duration = audio.getDuration() || audio_duration / 1000;
    const duration = total_duration - currentTime;

    // Split the segment evenly across this dialogue's images.
    const imagesCount = image_paths.length;
    const perImageDuration = duration / imagesCount;
    const imageRefs: Reference<Img>[] = [];
    image_paths.forEach((source: string, index: number) => {
      const imageRef = createRef<Img>();
      imageRefs[index] = imageRef;
      view.add(<Img src={source} opacity={0} height={'100%'} scale={1.2} ref={imageRef} />);
    });

    // Caption: longer text sits higher so it can wrap without leaving the frame.
    const textRef = createRef<Txt>();
    const text_length = caption_text.length;
    const bottom_destination = text_length > 55 ? 300 : text_length > 30 ? 400 : 550;
    view.add(
      <Txt
        fontFamily="'Hiragino Sans GB', 'Microsoft YaHei', 'WenQuanYi Micro Hei', sans-serif"
        fontSize={64}
        textWrap={true}
        fill={'white'}
        bottom={[0, bottom_destination]}
        maxWidth={'80%'}
        opacity={0}
        ref={textRef}
      >
        {caption_text}
      </Txt>,
    );
    const text = textRef();

    // Pick one of ten pan/zoom movements, deterministically per image.
    const createRandomMovie = (index: number, duration: number) => {
      const image = imageRefs[index]();
      const random = Math.floor(((index + 1) * imagesCount * duration) % 10);
      const actions = [
        // () => image.opacity(1, duration), // no change
        () => image.offset([-0.05, 0], duration),
        () => image.offset([0.05, 0], duration),
        () => image.offset([-0.05, 0], duration / 2).to([0.05, 0], duration / 2),
        () => image.offset([0.05, 0], duration / 2).to([-0.05, 0], duration / 2),
        () => chain(image.scale(1.3, 0), image.skew([1, 2], duration / 2).to([0, 0], duration / 2)),
        () => image.scale(1.3, duration),
        () => chain(image.scale(1.3, 0), image.scale(1.2, duration)),
        () => chain(image.scale(1.3, duration / 2), image.scale(1.2, duration / 2)),
        () => chain(image.scale(1.3, 0), image.offset([0.05, 0.05], duration / 2), image.offset([0.1, -0.05], duration / 2)),
        () => chain(image.scale(1.3, 0), image.offset([-0.05, -0.05], 0), image.offset([0, 0.05], duration / 2), image.offset([0.05, -0.05], duration / 2)),
      ];
      if (debug) {
        console.debug('random:', {random, index, imagesCount, duration});
      }
      const act = actions[random];
      return act();
    };

    // Fade each image in, hold it for its share of the segment, fade it out,
    // with its movement running in parallel.
    const imagesChain = [];
    for (let i = 0; i < imagesCount; i++) {
      const image = imageRefs[i]();
      imagesChain.push(
        all(
          chain(image.opacity(1, 0), waitFor(perImageDuration), image.opacity(0, 0)),
          createRandomMovie(i, perImageDuration),
        ),
      );
    }

    // Show the caption for the whole segment while the images play in order.
    yield* all(
      chain(text.opacity(1, 0), waitFor(duration), text.opacity(0, 0)),
      chain(...imagesChain),
    );
  }
});
```
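Because the "random" action index is derived arithmetically rather than from Math.random(), it can be pulled out into pure functions and unit-tested in isolation. A sketch with the same formulas as the scene code (pickActionIndex, ACTION_COUNT, and perImageDuration are my names, not part of the scene or Revideo):

```typescript
// Number of entries in the scene's `actions` array.
const ACTION_COUNT = 10;

// Same formula as the scene code: deterministic, so the same dialogue
// always produces the same camera movement.
function pickActionIndex(index: number, imagesCount: number, duration: number): number {
  return Math.floor(((index + 1) * imagesCount * duration) % ACTION_COUNT);
}

// Per-image display time, as computed in the dialogue loop.
function perImageDuration(totalDuration: number, imagesCount: number): number {
  return totalDuration / imagesCount;
}

console.log(pickActionIndex(0, 3, 4)); // 2
console.log(pickActionIndex(1, 3, 4)); // 4
console.log(perImageDuration(12, 3)); // 4
```

Isolating these also makes it easy to swap in a real RNG later without touching the scene's animation logic.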
FYI: to edit the video, you edit the dataset that is passed in through variables. That is my idea.
I tried to import my project, but it does not work normally.
I want to write a very minimal editor where users only need to pass in text to create video effects.
Can this be achieved?
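To make the "text in, video out" idea concrete: the editor would only need to map user-supplied caption lines onto the dataset the scene consumes. A sketch of that mapping under my own assumptions (buildDataset, the generated paths, and the pacing heuristic are invented for illustration; a real editor would fill the paths via TTS and image generation):

```typescript
// Shape of one dialogue entry, matching the fields the scene destructures.
interface Dialogue {
  audio_path: string;
  caption_text: string;
  image_paths: string[];
  audio_duration: number; // milliseconds
}

// Hypothetical helper: turn plain caption lines into the scene's dataset.
// The paths are placeholders for assets a real pipeline would produce.
function buildDataset(captions: string[]): {debug: boolean; dialogues: Dialogue[]} {
  const dialogues = captions.map((caption_text, i) => ({
    audio_path: `/generated/audio/${i}.mp3`,
    caption_text,
    image_paths: [`/generated/images/${i}-0.jpg`],
    // Rough pacing guess: ~150 ms per character, at least 1.5 s per line.
    audio_duration: Math.max(1500, caption_text.length * 150),
  }));
  return {debug: false, dialogues};
}

const data = buildDataset(['First line', 'Second line']);
console.log(data.dialogues.length); // 2
```

The resulting object would then be JSON-stringified and handed to the player as the 'data' variable, so the scene code itself never needs to change.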