Closed: TigerHH6866 closed this issue 1 month ago
So in that case, even a 4090 probably can't achieve real-time generation.
My local GPU is a 2060 8G, too slow to use: a 3-second video takes half an hour. On a 2080 Ti in the cloud, a 3-second video takes a few minutes.
For a new video, most of the time is consumed by pre-processing, including face detection, face parsing, and so on. Generation time can be significantly reduced if you use the same video with different audios by saving those results in advance. Please refer to the real-time inference scripts.
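The caching idea described above can be sketched as follows. This is a minimal illustration, not the actual MuseTalk code: `hypothetical_detect_and_parse` is a stand-in for the expensive pre-processing step, and the cache layout is an assumption.

```python
import os
import pickle

def hypothetical_detect_and_parse(video_path):
    # Placeholder for the expensive per-video preprocessing
    # (face detection, face parsing, etc.). In MuseTalk this is
    # the step that dominates runtime for a new video.
    return {"video": video_path, "faces": ["frame0_face", "frame1_face"]}

def get_preprocessed(video_path, cache_dir="preprocess_cache"):
    """Load cached preprocessing results if present; otherwise compute and save.

    Reusing the same video with different audios then skips
    detection/parsing entirely on subsequent runs.
    """
    os.makedirs(cache_dir, exist_ok=True)
    cache_path = os.path.join(cache_dir, os.path.basename(video_path) + ".pkl")
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)  # cache hit: no preprocessing needed
    result = hypothetical_detect_and_parse(video_path)
    with open(cache_path, "wb") as f:
        pickle.dump(result, f)
    return result
```

With this pattern, only the first run on a given video pays the preprocessing cost; every later run with a new audio track loads the saved results from disk.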
"While MuseTalk performs inference, the sub-thread can transmit results to the user at the same time. The generation process can reach 30fps+ on an NVIDIA Tesla V100."
A question: is the sub-thread's real-time output a video stream? Where can I see it?
Currently the output is individual frames, one image at a time; see the code at https://github.com/TMElyralab/MuseTalk/blob/main/scripts/realtime_inference.py#L218
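The producer/consumer pattern behind that answer can be sketched as below. This is a simplified illustration, not the linked script: the generation loop is faked with strings, and in the real code each queue item would be an image frame.

```python
import queue
import threading

def generate_frames(n_frames, out_q):
    # Producer: stands in for the MuseTalk inference loop.
    for i in range(n_frames):
        out_q.put(f"frame_{i}")  # real code would put an image array here
    out_q.put(None)  # sentinel: generation finished

def stream_to_user(out_q):
    # Consumer: receives frames as soon as they are produced,
    # so delivery overlaps with generation instead of waiting for it.
    delivered = []
    while True:
        frame = out_q.get()
        if frame is None:
            break
        delivered.append(frame)
    return delivered

q = queue.Queue()
producer = threading.Thread(target=generate_frames, args=(5, q))
producer.start()
frames = stream_to_user(q)
producer.join()
```

Because the consumer pulls from the queue while the producer is still running, the user starts receiving frames before the whole clip is generated, which is what makes 30fps+ streaming on a fast GPU possible.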
I use an NVIDIA 2060 Super; a 10-second video takes 7 hours!
Now I can get it down to 15 minutes for a 30-second video.
@TigerHH6866 May I ask how you did it? It's slow on my laptop.
A laptop GPU may be weaker than a PC's. Using the real-time script to create the avatar first will save time on subsequent generations.
Yes, won’t the images be saved during real-time generation?