Closed jbilcke-hf closed 3 months ago
done
Context
Since we have a decent way of generating a baseline video (e.g. using Live Portrait directly from Clapper),
we need to take those videos and lip-sync them to the dialogue segments.
Solution
I propose running the lipsync workflow in the segment resolver (e.g. after generating a video or applying a video face swap).
For instance, this could be done after this block:
https://github.com/jbilcke-hf/clapper/blob/main/packages/app/src/app/api/resolve/route.ts#L164-L203
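To make the proposal concrete, here is a minimal sketch of what a lipsync pass in the segment resolver could look like. All names here (`TimelineSegment`, `findOverlappingDialogues`, `applyLipsync`, the field names, and the injected `runLipsyncWorkflow` callback) are hypothetical illustrations, not Clapper's actual types or API:

```typescript
// Hypothetical sketch — these types and functions are illustrative,
// not Clapper's real data model.
type TimelineSegment = {
  id: string
  category: 'video' | 'dialogue'
  startTimeInMs: number
  endTimeInMs: number
  assetUrl: string
}

// Find the dialogue segments whose time range overlaps the freshly
// generated video segment, so a lipsync pass can be queued for each.
function findOverlappingDialogues(
  video: TimelineSegment,
  segments: TimelineSegment[]
): TimelineSegment[] {
  return segments.filter(
    (s) =>
      s.category === 'dialogue' &&
      s.startTimeInMs < video.endTimeInMs &&
      s.endTimeInMs > video.startTimeInMs
  )
}

// After the video has been generated (or face-swapped), run the lipsync
// workflow once per overlapping dialogue line, chaining the result so
// each pass operates on the previously lipsynced output.
async function applyLipsync(
  video: TimelineSegment,
  segments: TimelineSegment[],
  runLipsyncWorkflow: (videoUrl: string, audioUrl: string) => Promise<string>
): Promise<TimelineSegment> {
  let currentUrl = video.assetUrl
  for (const dialogue of findOverlappingDialogues(video, segments)) {
    currentUrl = await runLipsyncWorkflow(currentUrl, dialogue.assetUrl)
  }
  return { ...video, assetUrl: currentUrl }
}
```

The idea is that the resolver already has both the rendered video segment and the timeline at hand right after the block linked above, so hooking the lipsync step in there avoids a separate post-processing pass.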