Closed wallkop closed 6 months ago
Thank you for the interest in my work.
"But I still failed when I tried to make the student model work on the mobile phone, because the mobile device does not have a powerful enough GPU to support the student model."
That's pretty much all this version can do.
"So I have always had an idea, is it possible to use GPU to train a model that can be used without GPU? For example, models like live2d can be run on the mobile side. This can greatly expand the usage scenarios of talking-head-anime."
I don't know. I want to do it too, but I cannot do it yet. I will continue doing research until I find a solution or until other researchers find it first.
Hi pkhungurn, first of all, I have the highest respect for your work.
In the demo v4 version, I trained a student model and tested it. It ran very well in real time: compared with demo v3, it used fewer GPU resources and the smoothness was also improved.
But I still failed when I tried to make the student model work on the mobile phone, because the mobile device does not have a powerful enough GPU to support the student model.
So I have always had an idea: is it possible to use the GPU to train a model that can then be used without a GPU? For example, models like Live2D can be run on the mobile side. This could greatly expand the usage scenarios of talking-head-anime.

I recently used Python to save a series of pictures generated by the talking-head-anime model, used them to implement a frame-by-frame animation, and finally ran the process in Unity. But the switching between actions was still not smooth enough, and it was difficult to display complex movements such as blinking and head shaking. So I was wondering: is it possible, like with a Live2D model, that what we ultimately train is a morpher of the image, so that the animation effect is achieved through image deformation?