
SepMark: Deep Separable Watermarking for Unified Source Tracing and Deepfake Detection

Training on the COCO dataset does not work well #6

Closed happyfox-dot closed 4 months ago

happyfox-dot commented 4 months ago

Hello, it's truly great work! I'm trying to train this method on the COCO dataset; however, the result is not acceptable. Here is the relevant part of the cfg file:

```yaml
pool_F:
  - Identity()
```

I added Identity() so that the FakeImage is not modified.
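
For context, here is a minimal sketch of what this configuration implies, assuming the distortion pool is simply a list of modules applied to the watermarked image (the wiring is an assumption for illustration, not SepMark's exact code):

```python
import torch
import torch.nn as nn

# Assumed noise-pool interface: each entry maps a watermarked image tensor
# to a distorted one. nn.Identity() is PyTorch's built-in no-op module and
# plays the same role as the Identity() named in the cfg above.
pool_F = [nn.Identity()]

x = torch.randn(1, 3, 128, 128)      # stand-in for a batch of watermarked images
assert torch.equal(pool_F[0](x), x)  # the sampled "distortion" changes nothing,
                                     # so the semi-fragile branch never sees a
                                     # real Deepfake-style manipulation
```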

```
Epoch 96 : 31
g_loss=0.44432239517380917,error_rate_C=6.592827004219409e-06,error_rate_R=5.2742616033755275e-05,error_rate_F=0.0,psnr=15.220756856701042,ssim=0.30968375824674776,g_loss_on_discriminator=7.985462092146089,g_loss_on_encoder_MSE=0.12028437004059175,g_loss_on_encoder_LPIPS=0.42160171270370483,g_loss_on_decoder_C=8.604880418669715e-05,g_loss_on_decoder_R=0.0015282267387779548,g_loss_on_decoder_F=0.0006577925987930709,d_loss=0.0009559309810010975,
Epoch 97 : 33
g_loss=0.44987347495706775,error_rate_C=1.3185654008438819e-05,error_rate_R=7.91139240506329e-05,error_rate_F=0.0,psnr=15.591733763489541,ssim=0.3230231887177576,g_loss_on_discriminator=8.030771122703069,g_loss_on_encoder_MSE=0.1104285511412198,g_loss_on_encoder_LPIPS=0.4248430544062506,g_loss_on_decoder_C=9.346913402278236e-05,g_loss_on_decoder_R=0.0019349979112938613,g_loss_on_decoder_F=0.00047457496314368483,d_loss=0.00038574148401252547,
Epoch 98 : 33
g_loss=0.463830151512653,error_rate_C=0.0,error_rate_R=0.00011207805907172995,error_rate_F=0.0,psnr=14.800771025162708,ssim=0.3044330907773368,g_loss_on_discriminator=7.994142309019837,g_loss_on_encoder_MSE=0.1326209185998651,g_loss_on_encoder_LPIPS=0.43972447212738325,g_loss_on_decoder_C=9.168346436057664e-05,g_loss_on_decoder_R=0.0018994903878627133,g_loss_on_decoder_F=0.0004193938861872199,d_loss=2.7666932095875197e-05,
Epoch 99 : 33
g_loss=0.4481487391115744,error_rate_C=0.0,error_rate_R=7.91139240506329e-05,error_rate_F=0.0,psnr=15.413647530954096,ssim=0.31335030477258224,g_loss_on_discriminator=8.005382163615167,g_loss_on_encoder_MSE=0.11506895928443232,g_loss_on_encoder_LPIPS=0.4235509732101537,g_loss_on_decoder_C=0.00012377692621180647,g_loss_on_decoder_R=0.0016653091570103093,g_loss_on_decoder_F=0.00067069056838798,d_loss=2.0882688970025038e-05,
Epoch 100 : 34
g_loss=0.4721301239502581,error_rate_C=6.59282700421941e-05,error_rate_R=0.00034282700421940936,error_rate_F=0.0,psnr=15.366133327725567,ssim=0.29823058541817,g_loss_on_discriminator=7.988023413887507,g_loss_on_encoder_MSE=0.11634246571154534,g_loss_on_encoder_LPIPS=0.4462934366509884,g_loss_on_decoder_C=0.00011311929497596677,g_loss_on_decoder_R=0.001419345612484443,g_loss_on_decoder_F=0.0010512039607650117,d_loss=0.00045747594146022596,
```

Looking forward to your help! Have you trained this method for watermark embedding only (i.e., with noise_F set to Identity())?

sh1newu commented 4 months ago

First of all, adding Identity() to pool_F seems ill-suited if it means the semi-fragile detector is optimized not to extract the embedded data even under Identity(), i.e., on unmodified images. If you want to train only the robust branch of SepMark on the COCO dataset, it would be better to modify the architecture and remove the semi-fragile branch entirely. Alternatively, you can simply freeze the parameters of the semi-fragile detector and set its loss weight to zero; one underlying benefit of this approach is that you may be able to load our pre-trained parameters successfully. Another point to note is that you should make sure the watermarking model trains stably as a GAN.
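
A minimal sketch of that freeze-and-zero-weight option, assuming the semi-fragile detector is a standalone nn.Module and the generator objective is a weighted sum of the per-branch losses named in the logs above (all names here are hypothetical, not SepMark's exact API):

```python
import torch.nn as nn

def freeze_semi_fragile(decoder_F: nn.Module) -> None:
    # Keep the loaded pre-trained weights intact: no gradients flow into
    # the semi-fragile detector during fine-tuning.
    for p in decoder_F.parameters():
        p.requires_grad = False

def generator_loss(l_disc, l_enc_mse, l_enc_lpips, l_dec_C, l_dec_R, l_dec_F,
                   lambda_F=0.0):
    # Weighted sum of branch losses; with lambda_F == 0 the semi-fragile
    # term (g_loss_on_decoder_F in the logs) drops out of the update, so
    # only the robust branch and image quality drive training.
    return l_disc + l_enc_mse + l_enc_lpips + l_dec_C + l_dec_R + lambda_F * l_dec_F
```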

happyfox-dot commented 4 months ago

Thanks for your help! I would like to try loading the pre-trained parameters to continue training on the COCO dataset. Best wishes!