MRzzm / DINet

The source code of "DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video."

Use chinese-hubert-large to generate audio features #95

Closed tailangjun closed 2 months ago

tailangjun commented 4 months ago

Has anyone managed to switch to chinese-hubert-large for generating the audio features? I tried a few wavs and printed hubert_shape and deepspeech_shape after extracting the features:

basename:20100219005015_s0410d04p0_f135h243_c71o1.wav, hubert_shape:(1, 438, 1024), deepspeech_shape:(219, 29)
basename:20100217002818_s1418d05p0_f130h235_c75o0.wav, hubert_shape:(1, 466, 1024), deepspeech_shape:(234, 29)
basename:20100219005015_s0405d05p0_f135h243_c71o1.wav, hubert_shape:(1, 438, 1024), deepspeech_shape:(219, 29)
basename:20100220004137_s0024d02p0_f131h235_c74o0.wav, hubert_shape:(1, 310, 1024), deepspeech_shape:(155, 29)
basename:20100217002818_s0043d03p0_f127h228_c69o0.wav, hubert_shape:(1, 492, 1024), deepspeech_shape:(247, 29)

Their temporal dimensions differ. How can I resolve this? I'm not sure whether directly modifying deep_speech.py is feasible.

ziyichen-paii commented 3 months ago

Hubert has a different fps than deepspeech, roughly 2× the fps of deepspeech. You may need to adjust the audio-feature data preparation to store the aligned audio features. Additionally, the audio encoders in SyncNet and DINet may also need to be modified. The channel-dimension difference is easy to handle; there is an audio-channel parameter in the config.

tailangjun commented 3 months ago

> Hubert has a different fps than deepspeech, roughly 2× the fps of deepspeech. […]

Thanks very much.

lililuya commented 3 months ago

> Has anyone managed to switch to chinese-hubert-large for generating the audio features? […]

Hey, have you made it work using the HuBERT features? Looking forward to your reply!

tailangjun commented 3 months ago

> Hey, have you made it work using the HuBERT features? Looking forward to your reply!

That was me. Following ziyichen-paii's hints, I swapped the acoustic model to HuBERT.

tailangjun commented 3 months ago

> Hubert has a different fps than deepspeech, roughly 2× the fps of deepspeech. […]

Following your hints, I made the changes below, and SyncNet now trains:

  1. Hubert has a different fps than deepspeech, roughly 2× the fps of deepspeech. This can be handled with numpy-style interval slicing (see the extraction sketch after this list):

    CHModel = ChineseHubert(model_path)
    ch_feature = CHModel.compute_audio_feature(audio_path)
    print(ch_feature.shape)
    # (1, 239, 1024)
    print(ch_feature[:, ::2, :].shape)
    # (1, 120, 1024) -- this shape now meets the requirement
  2. You may need to adjust the audio-feature data preparation to store the aligned audio features.

        image_embedding = self.face_encoder(image)
        # image: torch.Size([96, 15, 256, 256]) -> image_embedding: torch.Size([96, 128, 8, 8])

        audio_embedding = self.audio_encoder(audio)
        audio_embedding = audio_embedding.unsqueeze(2).unsqueeze(3).repeat(1, 1, image_embedding.size(2), image_embedding.size(3))
        # audio: torch.Size([96, 1024, 9]) -> audio_embedding: torch.Size([96, 128, 8, 8])
  3. Additionally, the audio encoders in SyncNet and DINet may also need to be modified. I have not changed this part yet; I am still debating whether to enlarge kernel_size and padding:

        self.audio_conv = nn.Sequential(
            SameBlock1d(in_channel, 128, kernel_size=7, padding=3),
            ResBlock1d(128, 128, 3, 1),
            # temporal length 9 -> 5
            DownBlock1d(128, 128, 3, 1),
            ResBlock1d(128, 128, 3, 1),
            # 5 -> 3
            DownBlock1d(128, 128, 3, 1),
            ResBlock1d(128, 128, 3, 1),
            # 3 -> 2
            DownBlock1d(128, 128, 3, 1),
            SameBlock1d(128, out_dim, kernel_size=3, padding=1)
        )
  4. The channel-dimension difference is easy to handle; there is an audio-channel parameter in the config:

        self.parser.add_argument('--audio_channel', type=int, default=1024, help='input audio channels')
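
For reference, here is a minimal sketch of what my ChineseHubert wrapper from step 1 could look like, assuming the chinese-hubert-large checkpoint is loaded through Hugging Face transformers; the implementation here is illustrative, not my exact production code:

    import soundfile as sf
    import torch
    from transformers import HubertModel, Wav2Vec2FeatureExtractor

    class ChineseHubert:
        def __init__(self, model_path):
            # model_path: local directory (or HF id) of chinese-hubert-large
            self.extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
            self.model = HubertModel.from_pretrained(model_path).eval()

        @torch.no_grad()
        def compute_audio_feature(self, audio_path):
            wav, sr = sf.read(audio_path)  # expects 16 kHz mono audio
            inputs = self.extractor(wav, sampling_rate=sr, return_tensors="pt")
            # last_hidden_state: (1, T, 1024) at roughly 50 feature frames per second
            return self.model(inputs.input_values).last_hidden_state.numpy()

HuBERT produces ~50 feature frames per second while the video runs at 25 fps, hence the [:, ::2, :] slice in step 1, which keeps every second frame and halves the rate to match deepspeech's alignment.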
tailangjun commented 3 months ago

Let me share the training status. Since I followed the SyncNet implementation from Wav2lip and used criterionBCE for the loss, Loss_Sync hovers around 0.69. That's odd, because this same data caused no problems when training Wav2lip (Wav2lip used 200k+ clips, DINet 800+), and lr_s = 0.0001 is the same as in Wav2lip. Has anyone else run into this?

===> Epoch[1](43/1082):  Loss_Sync: 0.6924471 lr_s = 0.0001000 
===> Epoch[1](44/1082):  Loss_Sync: 0.6971218 lr_s = 0.0001000 
===> Epoch[1](45/1082):  Loss_Sync: 0.6999642 lr_s = 0.0001000 
===> Epoch[1](46/1082):  Loss_Sync: 0.6909024 lr_s = 0.0001000 
===> Epoch[1](47/1082):  Loss_Sync: 0.6918426 lr_s = 0.0001000 
===> Epoch[1](48/1082):  Loss_Sync: 0.6940660 lr_s = 0.0001000 
===> Epoch[1](49/1082):  Loss_Sync: 0.6927173 lr_s = 0.0001000 
===> Epoch[1](50/1082):  Loss_Sync: 0.6904847 lr_s = 0.0001000 
===> Epoch[1](51/1082):  Loss_Sync: 0.6890368 lr_s = 0.0001000 
===> Epoch[1](52/1082):  Loss_Sync: 0.6916773 lr_s = 0.0001000 
===> Epoch[1](53/1082):  Loss_Sync: 0.6919940 lr_s = 0.0001000 
===> Epoch[1](54/1082):  Loss_Sync: 0.7027859 lr_s = 0.0001000 
===> Epoch[1](55/1082):  Loss_Sync: 0.6873051 lr_s = 0.0001000 
===> Epoch[1](56/1082):  Loss_Sync: 0.6963055 lr_s = 0.0001000 
===> Epoch[1](57/1082):  Loss_Sync: 0.6966709 lr_s = 0.0001000 
===> Epoch[1](58/1082):  Loss_Sync: 0.6889315 lr_s = 0.0001000 
===> Epoch[1](59/1082):  Loss_Sync: 0.6978862 lr_s = 0.0001000 
===> Epoch[1](60/1082):  Loss_Sync: 0.7182798 lr_s = 0.0001000 
===> Epoch[1](61/1082):  Loss_Sync: 0.6984298 lr_s = 0.0001000 
===> Epoch[1](62/1082):  Loss_Sync: 0.7000048 lr_s = 0.0001000 
===> Epoch[1](63/1082):  Loss_Sync: 0.6982679 lr_s = 0.0001000 
===> Epoch[1](64/1082):  Loss_Sync: 0.6922650 lr_s = 0.0001000 
===> Epoch[1](65/1082):  Loss_Sync: 0.6933457 lr_s = 0.0001000 
===> Epoch[1](66/1082):  Loss_Sync: 0.6942512 lr_s = 0.0001000 
===> Epoch[1](67/1082):  Loss_Sync: 0.6949882 lr_s = 0.0001000 
===> Epoch[1](68/1082):  Loss_Sync: 0.6964686 lr_s = 0.0001000 
===> Epoch[1](69/1082):  Loss_Sync: 0.6903629 lr_s = 0.0001000 
===> Epoch[1](70/1082):  Loss_Sync: 0.6942266 lr_s = 0.0001000 
===> Epoch[1](71/1082):  Loss_Sync: 0.6895943 lr_s = 0.0001000 
===> Epoch[1](72/1082):  Loss_Sync: 0.6827189 lr_s = 0.0001000 
===> Epoch[1](73/1082):  Loss_Sync: 0.6949496 lr_s = 0.0001000 
===> Epoch[1](74/1082):  Loss_Sync: 0.6965183 lr_s = 0.0001000 
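
For reference, 0.69 ≈ ln 2, which is exactly the BCE value at chance level (the network predicting ~0.5 for every pair). The Wav2lip-style loss I followed is essentially a BCE over the cosine similarity of the audio and face embeddings; a sketch, with names following Wav2lip's training script:

    import torch.nn as nn
    import torch.nn.functional as F

    logloss = nn.BCELoss()

    def cosine_loss(audio_emb, face_emb, y):
        # cosine similarity of the L2-normalized embeddings is treated as the
        # probability that the pair is in sync; y is 1 for synced pairs, 0 otherwise
        d = F.cosine_similarity(audio_emb, face_emb)
        return logloss(d.unsqueeze(1), y)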
tailangjun commented 2 months ago

Later I switched the loss back to MSE and trained on 800 ten-second videos; the pipeline now works end to end. Below is the result of direct inference with no fine-tuning, so generalization looks acceptable, but the jitter is sometimes severe and still needs optimization.

https://github.com/MRzzm/DINet/assets/12316965/636a37d4-f926-4ae2-82e8-a3000300da30

lililuya commented 2 months ago

> Later I switched the loss back to MSE and trained on 800 ten-second videos; the pipeline now works end to end. […] Model4_facial_dubbing_add_audio.mp4

Hey, amazing work! Do you have any solution for the color difference in the face region caused by the mask, or do you just curate the dataset more strictly?

tailangjun commented 2 months ago

> Hey, amazing work! Do you have any solution for the color difference in the face region caused by the mask, or do you just curate the dataset more strictly?

I think fine-tuning can solve it. If you don't want to fine-tune every time, you probably need to enrich the dataset with more IDs and more skin tones; I plan to retrain on a dataset of 2,000 IDs.

tianlinzx commented 1 month ago

> I think fine-tuning can solve it. If you don't want to fine-tune every time, you probably need to enrich the dataset with more IDs and more skin tones; I plan to retrain on a dataset of 2,000 IDs.

Is there complete code available for reference and training?

tailangjun commented 1 month ago

> Is there complete code available for reference and training?

You can refer to Wav2lip's SyncNet.

liwang0621 commented 1 month ago

> Let me share the training status. Since I followed the SyncNet implementation from Wav2lip and used criterionBCE for the loss, Loss_Sync hovers around 0.69. […]

After switching to HuBERT, which SyncNet weights did you use? Did you retrain SyncNet yourself?

tailangjun commented 4 weeks ago

> After switching to HuBERT, which SyncNet weights did you use? Did you retrain SyncNet yourself?

Yes, SyncNet has to be retrained.

liwang0621 commented 4 weeks ago

> Yes, SyncNet has to be retrained.

One more question: at the frame-256 stage I find the synchronization is extremely poor; the mouth moves, but with basically no correspondence to the audio. Have you run into this? Any suggestions? And is it still worth continuing with the clip stage?

lililuya commented 4 weeks ago

> At the frame-256 stage I find the synchronization is extremely poor; the mouth moves, but with basically no correspondence to the audio. […]

The frame stage is mainly reconstruction and warping; sync_loss is only introduced in the clip stage.

A11enCheung commented 1 week ago

> Later I switched the loss back to MSE and trained on 800 ten-second videos; the pipeline now works end to end. […] Model4_facial_dubbing_add_audio.mp4

Hi, did you modify anything in the DINet training part itself, or only the audio features?

tailangjun commented 1 week ago

> Hi, did you modify anything in the DINet training part itself, or only the audio features?

No modifications are needed there.

hnsywangxin commented 5 days ago

@tailangjun @ziyichen-paii A question: I replaced mel with HuBERT to train wav2lip, and it runs, but when training SyncNet the loss hovers at 0.69 and won't drop; with mel it drops fine. Could you help me figure out where the problem is?

1: Wav2lip's face-encoder output is (8, 1024, 1, 1), where 8 is the batch size, while my HuBERT feature is (8, 1024, 10) (the mel input is (8, 1, 80, 16), which becomes (8, 1024, 1, 1) after the convolutions and trains normally). So I first permute the dimensions, then use a conv1d to reduce the last dimension, finally getting (8, 1024, 1, 1). The code is as follows:

[code screenshot]

2: The audio_encoder code is as follows:

[code screenshot]

Thanks.
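
To make the shape flow in point 1 concrete, here is a minimal sketch; the layer name and kernel size are illustrative assumptions, not the actual code behind the screenshot:

    import torch
    import torch.nn as nn

    # hypothetical head that collapses the 10 HuBERT frames into one embedding
    reduce_time = nn.Conv1d(1024, 1024, kernel_size=10)  # valid conv removes the time axis

    feat = torch.randn(8, 10, 1024)   # (batch, frames, channels) from HuBERT
    feat = feat.permute(0, 2, 1)      # -> (8, 1024, 10), channels-first for Conv1d
    emb = reduce_time(feat)           # -> (8, 1024, 1)
    emb = emb.unsqueeze(3)            # -> (8, 1024, 1, 1), matching the face encoder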

tailangjun commented 4 days ago

> I replaced mel with HuBERT to train wav2lip, and it runs, but when training SyncNet the loss hovers at 0.69 and won't drop. […]

Try making the audio_encoder deeper.

hnsywangxin commented 4 days ago

> Try making the audio_encoder deeper.

Hi, I've already deepened the network, following the design of DINet's audio_encoder, and it's still the same. I also changed the loss from BCELoss to MSELoss, and it still doesn't drop.