netabecker / Stegastamp_pytorch_version


Issue with decipher_indicator #16

Open SVithurabiman opened 2 months ago

SVithurabiman commented 2 months ago

Hi @netabecker, many thanks for your work. I have been trying to train the model, but I receive the following results (training screenshots attached).

As you can see, the decipher_indicator is always 0. I have also tried some other seeds for numpy, as you mentioned in another issue, and I still cannot decode the message. Your assistance in this regard would be much appreciated. Thanks.

SVithurabiman commented 2 months ago

After 3 days of trying, I used seed 18 for numpy and 0 for torch and managed to reproduce something closer to the reported results, but I get a pixelated effect in the middle of the image. I have included an image for reference (screenshot attached).
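For reference, a minimal sketch of how those seeds could be set at the top of the training script (the exact placement in train.py is an assumption on my part):

```python
import numpy as np
import torch

# Assumed seeding setup; adjust to wherever train.py initializes randomness.
np.random.seed(18)    # numpy seed that worked in this run
torch.manual_seed(0)  # torch seed used alongside it
```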

@netabecker, your assistance in providing some insight into why this is happening would be much appreciated.

netabecker commented 2 months ago

It looks good! Are you able to decode the messages in the model's current state? If so, perhaps training it for a bit longer might do the trick. It looks like you're also experiencing some 'overflowing' (the bright green pixels in the frame of the residual image). I would try clipping the values between the encoder and the decoder; that should help avoid the overflow and will hopefully give you a better encoding result.
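A minimal sketch of what that clipping could look like; the function name and the [0, 1] range are assumptions, not the exact code in this repo's train.py:

```python
import torch

def clip_encoder_output(encoded_image: torch.Tensor) -> torch.Tensor:
    """Clamp the encoder output to the valid image range before it is
    passed to the decoder, so out-of-range 'overflow' values are removed.
    The [0, 1] range is an assumption; use [-1, 1] if the pipeline
    normalizes images that way."""
    return torch.clamp(encoded_image, 0.0, 1.0)

# Usage (random tensor standing in for the encoder output):
encoded = torch.randn(1, 3, 400, 400)
decoder_input = clip_encoder_output(encoded)
```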

SVithurabiman commented 2 months ago

Yes, I am able to decode it. I changed the border to white while training, which may have caused the border artifact; my main issue is the pixelated effect in the center of the image. I also noticed that line 99 in train.py is commented out and line 100 is used in the training process. However, in the original TF1 implementation the lpips_loss_scale and G_loss_scale are used in loss_scales. Could you please share the reasoning behind passing 0 for them in your code?
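For context, here is a rough sketch of how those scales typically combine the loss terms in a StegaStamp-style training loop; the function and argument names are illustrative, not the exact ones from this repo's train.py:

```python
import torch

def combined_loss(image_loss: torch.Tensor,
                  lpips_loss: torch.Tensor,
                  secret_loss: torch.Tensor,
                  g_loss: torch.Tensor,
                  image_loss_scale: float,
                  lpips_loss_scale: float,
                  secret_loss_scale: float,
                  G_loss_scale: float) -> torch.Tensor:
    """Weighted sum of the individual loss terms. Passing 0 for
    lpips_loss_scale or G_loss_scale effectively removes that term
    from training, which is what the question above is about."""
    return (image_loss_scale * image_loss
            + lpips_loss_scale * lpips_loss
            + secret_loss_scale * secret_loss
            + G_loss_scale * g_loss)
```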