facebookresearch / stable_signature

Official implementation of the paper "The Stable Signature: Rooting Watermarks in Latent Diffusion Models"

Please upload the pretrained HiDDeN decoder weights. #5

Closed DI-LEE closed 1 year ago

DI-LEE commented 1 year ago

Hello, I'm DI from Korea University. I'm very impressed with your research, so I want to reproduce various parts of it, but I failed to pretrain a 32-bit HiDDeN decoder. Could you please upload the pretrained 32-bit decoder weights or the training code?

Thank you. Best regards, DI Lee

pierrefdz commented 1 year ago

Hi, the HiDDeN models are already uploaded (see the README.md). The code to run the watermark networks is in https://github.com/facebookresearch/stable_signature/tree/main/hidden. Tell me if I'm missing something.

DI-LEE commented 1 year ago

Thank you for the fast reply. I'm reproducing the 32-bit HiDDeN decoder, but its bit accuracy only reaches 87%. I'm trying to get to 100%. Could I have access to the decoder weights used in your code?

pierrefdz commented 1 year ago

I haven't used a 32-bit decoder. The 48-bit one is given in the repository: https://github.com/facebookresearch/stable_signature/tree/main/hidden/ckpts or in https://github.com/facebookresearch/stable_signature/tree/main#watermark-models

What is the decoder you are referring to? You can also share the parameters used for your training (the optimization can be a bit unstable depending on them).

DI-LEE commented 1 year ago

Sorry, I mistakenly thought that you had run the experiments with 32 bits. After rechecking your paper, I see there is no 32-bit decoder.

If you know how to reach 100% bit accuracy with a 32-bit decoder, I would be thankful if you shared it with me.

Thank you for your time.

pierrefdz commented 1 year ago

What is your setup? Can you share the logs of your run?

This command should work directly if you have the same setup (after replacing 48 with 32): https://github.com/facebookresearch/stable_signature/tree/main/hidden#example

If you don't have the same setup (for instance, a different number of GPUs), you can try reducing the learning rate to a lower value such as 1e-3.

DI-LEE commented 1 year ago

I already tried replacing 48 with 32.

The command line is shown below.

```
torchrun --nproc_per_node=8 main.py \
    --val_dir path/to/coco/test2014/ --train_dir path/to/coco/train2014/ \
    --output_dir output --eval_freq 5 \
    --img_size 256 --num_bits 32 --batch_size 16 --epochs 300 \
    --scheduler CosineLRScheduler,lr_min=1e-6,t_initial=300,warmup_lr_init=1e-6,warmup_t=5 \
    --optimizer Lamb,lr=2e-2 \
    --p_color_jitter 0.0 --p_blur 0.0 --p_rot 0.0 --p_crop 1.0 --p_res 1.0 --p_jpeg 1.0 \
    --scaling_w 0.3 --scale_channels False --attenuation none \
    --loss_w_type bce --loss_margin 1
```

pierrefdz commented 1 year ago

Thank you, I'll try to have a look at it in the following days.

pierrefdz commented 1 year ago

Hi DI, I've run the code on my side and the accuracy reaches 1.00. Here are the logs: https://drive.google.com/file/d/1qG21QKmVkikQlQ4c2mzqDIsLNe04bQqp/view?usp=drive_link

Could you share the logs of your console so that I can see if there is an issue? Otherwise, there might be a difference either in the dataset or in the code I ran (but I don't think so, since the code I used is the one I pushed here...)

DI-LEE commented 1 year ago

I very much appreciate it, but I can't access your drive. Could you please grant me access?

pierrefdz commented 1 year ago

Can you request it? Otherwise, I need an email address to give you access rights.

DI-LEE commented 1 year ago

My email address is dongeen1@gmail.com, and my log is below.

log.txt

DI-LEE commented 1 year ago

I'm really sorry to ask, but could you possibly upload the weight file for the 32-bit pretrained model to the drive?

pierrefdz commented 1 year ago

Our logs do not show the same lr, so I think the issue comes from there. My guess is that your world size differs in your distributed setup, so the lr is scaled differently (I don't have enough experiments on this; I used the same scaling as in image recognition, which might not be well adapted here).

You can try removing this line https://github.com/facebookresearch/stable_signature/blob/main/hidden/main.py#L218 and fixing the lr to 0.005 instead of 2e-2.
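For context, here is a hypothetical sketch of the linear lr-scaling rule commonly used in distributed image-recognition training, which is what the comment above refers to. The function name, the reference batch size of 512, and the exact formula are illustrative assumptions, not necessarily the repository's code:

```python
# Illustrative sketch (assumed, not the repository's exact code) of the
# linear lr-scaling rule: the base lr is multiplied by the effective
# (global) batch size relative to a reference batch size.
def scaled_lr(base_lr: float, batch_size: int, world_size: int,
              reference_batch: int = 512) -> float:
    """Scale the learning rate with the global batch size (per-GPU batch
    times number of processes)."""
    effective_batch = batch_size * world_size
    return base_lr * effective_batch / reference_batch

# With 8 GPUs and a per-GPU batch of 16 (effective batch 128), a base lr
# of 2e-2 scales down to 0.005 — consistent with the fixed value
# suggested above. A different world size would yield a different lr.
print(scaled_lr(2e-2, 16, 8))  # 0.005
```

This is why hard-coding the lr to 0.005 can reproduce the original run's dynamics when the number of GPUs differs.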

I won't be able to share the networks (it would need to go through internal review), sorry for that.

pierrefdz commented 1 year ago

Closing since no activity, feel free to re-open!