Wendy-Nam opened 1 week ago
I've found a way to make LipSync much more accurate, but it's still manual rather than automatic. Instead of touching Ren'Py's core, I just generate a sequence of mouth-shape images, like an animation, that I can play in sync with a voice line. I have a Python script that doesn't run inside Ren'Py: it takes a voice file, scans the volume every 50 ms, and shows the mouth open or closed depending on how loud each chunk is. This looks much better than working with phonemes like A, E, I, O, U, especially if you only have three mouth shapes.
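For reference, here is a minimal sketch of this kind of volume-based analysis (not the actual script, which was shared on Discord): it reads a WAV voice file in 50 ms chunks, computes each chunk's loudness, and marks the mouth as open or closed. The threshold value and the file path are made-up placeholders.

```python
import wave
import audioop  # standard library; deprecated since Python 3.11, removed in 3.13

CHUNK_MS = 50          # analysis window, as described above
OPEN_THRESHOLD = 1000  # hypothetical RMS level above which the mouth counts as "open"

def mouth_frames(path, chunk_ms=CHUNK_MS, threshold=OPEN_THRESHOLD):
    """Return one 'open'/'closed' flag per chunk of the voice file."""
    flags = []
    with wave.open(path, "rb") as wav:
        width = wav.getsampwidth()
        samples_per_chunk = int(wav.getframerate() * chunk_ms / 1000)
        while True:
            data = wav.readframes(samples_per_chunk)
            if not data:
                break
            loudness = audioop.rms(data, width)  # average loudness of this chunk
            flags.append("open" if loudness > threshold else "closed")
    return flags

if __name__ == "__main__":
    # Prints something like ['closed', 'open', 'open', 'closed', ...]
    print(mouth_frames("voice/line_001.wav"))
```

Each flag can then be mapped to a mouth image and shown as one animation frame every 50 ms alongside the voice file.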
Very interesting. If you don't mind my asking, could you share the library or code?
I sent you the script on Discord, check it out.
I'm preparing a major update for the program. I couldn't address these issues earlier because my skills weren't up to par (and because of a busy academic schedule).
Here are my main priorities:
My first focus is on Ren'Py integration. Once that's resolved, I'll move on to items 2, 3, and 4.
The current lip-sync system is basic: it relies on screen pauses and lip changes, which leads to timing issues and interaction problems. I applied quick fixes, but they were superficial and didn't solve the root problem.
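For illustration only (this is not the tool's actual code), a pause-driven loop like the following hypothetical snippet shows where the trouble comes from: every mouth swap needs its own `renpy.pause()`, and each pause starts a new interaction, so timing drifts and player input gets tangled up with the animation. The image tags and function name are assumptions.

```python
# Hypothetical sketch of the pause-based approach; it only works inside the
# Ren'Py engine, where renpy.exports provides show() and pause().
import renpy.exports as renpy

def play_mouth_frames(frames, chunk_seconds=0.05):
    """Swap mouth images in lockstep with 50 ms pauses."""
    for state in frames:                       # e.g. ["closed", "open", "open", ...]
        renpy.show("eileen mouth_" + state)    # hypothetical tags: "eileen mouth_open" / "eileen mouth_closed"
        renpy.pause(chunk_seconds, hard=True)  # each pause is a separate interaction
```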
To truly fix this, I needed to understand Ren'Py's core structure, including internal functions like `say`, `interaction`, and `context`. Recently, through other development work, I've gained more knowledge of these areas and feel ready to implement a better solution.

A Personal Note
Although game development is just a hobby, I'm committed to improving this tool. The support from users, from feedback to donations, was all new to me and has been incredibly meaningful.
I'll begin testing in November-December, with full improvements planned for January-March. Thank you for your patience, and stay tuned for updates!