lipku / metahuman-stream

Real-time interactive streaming digital human
https://zhuanlan.zhihu.com/p/675131165
MIT License
954 stars 217 forks

With an RTX 4090D running MuseTalk, the frame rate tops out at 15 fps; synthesis stalls after speech generation, and both RAM and VRAM are maxed out #103

Closed OneMobPsycho100 closed 4 weeks ago

OneMobPsycho100 commented 1 month ago

Setup: MuseTalk over WebRTC. Launch command: python app.py --model musetalk --transport webrtc --tts gpt-sovits --TTS_SERVER http://127.0.0.1:5000 --batch_size 16. Model: the MuseTalk sample model, 39c96ed0cc51f74357b8f27fb1f4e284
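Before launching with this command, it can help to confirm the GPT-SoVITS endpoint is actually up; a minimal preflight sketch (the URL mirrors the --TTS_SERVER flag above, adjust it to your setup):

```python
# Preflight check: does the TTS server answer at all? Any HTTP response,
# even an error status, means the process is listening.
import urllib.request
import urllib.error

def tts_server_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if the TTS server responds to an HTTP request."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # Server answered with 4xx/5xx, so it is up.
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print(tts_server_reachable("http://127.0.0.1:5000"))
```

If this prints False, fix the TTS server before debugging app.py itself.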

OneMobPsycho100 commented 1 month ago

The stall turned out to be my own usage error and is resolved now. However, the frame rate still caps at 15 fps, and there is a brief stutter when audio is output.
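To tell whether a cap like this sits in inference or in transport, one can time the frame-producing step directly. A minimal sketch, where `render_batch` is a stand-in for whatever produces a batch of frames (e.g. the MuseTalk inference call):

```python
# Frame-pacing probe: average achieved fps over several batch calls.
import time

def measure_fps(render_batch, batch_size: int, batches: int = 10) -> float:
    """Average frames per second over `batches` calls to render_batch()."""
    start = time.perf_counter()
    for _ in range(batches):
        render_batch()
    elapsed = time.perf_counter() - start
    return (batches * batch_size) / elapsed
```

If the measured fps here already sits near 15, the bottleneck is GPU inference (batch_size and model size are the levers), not WebRTC.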

Kevin746p commented 1 month ago

Hi, when you ran MuseTalk, did you hit FileNotFoundError: [Errno 2] No such file or directory: './data/avatars/avator_1/latents.pt'? Where can latents.pt be obtained?

OneMobPsycho100 commented 1 month ago

> Hi, when you ran MuseTalk, did you hit FileNotFoundError: [Errno 2] No such file or directory: './data/avatars/avator_1/latents.pt'? Where can latents.pt be obtained?

Download the digital-human model as described at https://github.com/lipku/metahuman-stream?tab=readme-ov-file#39-%E6%A8%A1%E5%9E%8B%E7%94%A8musetalk. Link: https://caiyun.139.com/m/i?2eAjs8optksop (access code: 3mkt). After extracting, copy the whole folder into data/avatars in this project.
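After extracting the archive, a quick sanity check avoids the FileNotFoundError above. A minimal sketch; the expected file list is an assumption based on the error path in this thread (extend it with whatever else your avatar archive ships):

```python
# Verify the avatar folder landed in the right place before launching.
import os

def check_avatar(avatar_dir: str) -> list[str]:
    """Return a list of expected files missing from avatar_dir."""
    expected = ["latents.pt"]  # assumed minimum; add other shipped files
    return [f for f in expected
            if not os.path.isfile(os.path.join(avatar_dir, f))]

missing = check_avatar("./data/avatars/avator_1")
if missing:
    print("missing files:", missing)
```

An empty result means the folder was copied (not nested one level too deep, a common unzip mistake).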

Kevin746p commented 1 month ago

> Hi, when you ran MuseTalk, did you hit FileNotFoundError: [Errno 2] No such file or directory: './data/avatars/avator_1/latents.pt'? Where can latents.pt be obtained?
>
> Download the digital-human model as described at https://github.com/lipku/metahuman-stream?tab=readme-ov-file#39-%E6%A8%A1%E5%9E%8B%E7%94%A8musetalk. Link: https://caiyun.139.com/m/i?2eAjs8optksop (access code: 3mkt). After extracting, copy the whole folder into data/avatars in this project.

Thank you

Kevin746p commented 1 month ago

Hi, one more question. This is how I start SRS:

docker run --rm -p 1935:1935 -p 8080:8080 -p 1985:1985 -p 8000:8000/udp registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5 objs/srs -c conf/rtc.conf

App startup:

(nerfstream) PS E:\projects\metahuman-stream> python app.py --model musetalk --transport webrtc
add ffmpeg to path
Loads checkpoint by local backend from path: ./models/dwpose/dw-ll_ucoco_384.pth
cuda start
Namespace(pose='data/data_kf.json', au='data/au.csv', torso_imgs='', O=False, data_range=[0, -1], workspace='data/video', seed=0, ckpt='data/pretrained/ngp_kf.pth', num_rays=65536, cuda_ray=False, max_steps=16, num_steps=16, upsample_steps=0, update_extra_interval=16, max_ray_batch=4096, warmup_step=10000, amb_aud_loss=1, amb_eye_loss=1, unc_loss=1, lambda_amb=0.0001, fp16=False, bg_img='white', fbg=False, exp_eye=False, fix_eye=-1, smooth_eye=False, torso_shrink=0.8, color_space='srgb', preload=0, bound=1, scale=4, offset=[0, 0, 0], dt_gamma=0.00390625, min_near=0.05, density_thresh=10, density_thresh_torso=0.01, patch_size=1, init_lips=False, finetune_lips=False, smooth_lips=False, torso=False, head_ckpt='', gui=False, W=450, H=450, radius=3.35, fovy=21.24, max_spp=1, att=2, aud='', emb=False, ind_dim=4, ind_num=10000, ind_dim_torso=8, amb_dim=2, part=False, part2=False, train_camera=False, smooth_path=False, smooth_path_window=7, asr=False, asr_wav='', asr_play=False, asr_model='cpierse/wav2vec2-large-xlsr-53-esperanto', asr_save_feats=False, fps=50, l=10, m=8, r=10, fullbody=False, fullbody_img='data/fullbody/img', fullbody_width=580, fullbody_height=1080, fullbody_offset_x=0, fullbody_offset_y=0, avatar_id='avator_1', bbox_shift=5, batch_size=16, customvideo=False, customvideo_img='data/customvideo/img', customvideo_imgnum=1, tts='edgetts', REF_FILE=None, TTS_SERVER='http://localhost:9000', CHARACTER='test', EMOTION='default', model='musetalk', transport='webrtc', push_url='http://localhost:1985/rtc/v1/whip/?app=live&stream=livestream', listenport=8010)
reading images... 100%|██████████| 1100/1100 [00:06<00:00, 179.71it/s]
reading images... 100%|██████████| 1100/1100 [00:00<00:00, 3849.62it/s]
start websocket server

Question: why does the browser show no video, and why does the websocket disconnect after just two chat exchanges?
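When the browser shows no video with an SRS setup like the one above, a first step is checking that the published ports are reachable from the host. A minimal sketch (assuming the container runs on localhost; 1985 is the HTTP API port that --push_url targets):

```python
# Probe the TCP ports the SRS container publishes. Note: 8000/udp (the
# WebRTC media port) cannot be checked with a TCP connect like this.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for p in (1935, 8080, 1985):
    print(p, "open" if port_open("127.0.0.1", p) else "closed")
```

If 1985 is closed, WHIP publishing cannot work regardless of what app.py does.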

OneMobPsycho100 commented 1 month ago

> Hi, one more question. This is how I start SRS: docker run --rm -p 1935:1935 -p 8080:8080 -p 1985:1985 -p 8000:8000/udp registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5 objs/srs -c conf/rtc.conf [...] Question: why does the browser show no video, and why does the websocket disconnect after just two chat exchanges?

For the websocket connection error: edit python/site-packages/flask_sockets.py and change self.url_map.add(Rule(rule, endpoint=f)) to self.url_map.add(Rule(rule, endpoint=f, websocket=True))
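Since editing a file inside site-packages by hand is error-prone, a small helper sketch can apply this one-line change mechanically. It is a blunt textual patch under the assumption that the installed flask_sockets.py contains exactly the line quoted above; back the file up first:

```python
# One-off patcher for the flask_sockets websocket fix described above.
OLD = "self.url_map.add(Rule(rule, endpoint=f))"
NEW = "self.url_map.add(Rule(rule, endpoint=f, websocket=True))"

def patch_file(path: str) -> bool:
    """Replace the Rule(...) call in-place; True if the file changed."""
    with open(path, "r", encoding="utf-8") as fh:
        src = fh.read()
    if OLD not in src:
        return False  # already patched, or a different flask_sockets version
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(src.replace(OLD, NEW))
    return True

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:  # pass the path to your flask_sockets.py
        print("patched:", patch_file(sys.argv[1]))
```

Running it a second time returns False, so it is safe to re-run after the fix is in place.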

https://github.com/lipku/metahuman-stream/blob/main/assets/faq.md

Kevin746p commented 1 month ago

> For the websocket connection error: edit python/site-packages/flask_sockets.py and change self.url_map.add(Rule(rule, endpoint=f)) to self.url_map.add(Rule(rule, endpoint=f, websocket=True))
>
> https://github.com/lipku/metahuman-stream/blob/main/assets/faq.md

This is with that change already applied.