google-deepmind / synthid-text

Apache License 2.0

The results of the Mean detector are identical with and without the watermark #7

Open wrx1990 opened 3 weeks ago

wrx1990 commented 3 weeks ago

In synthid_text_huggingface_integration.ipynb, in the "Option 1: Mean detector" part, I got the sample results below. From these results, I cannot tell whether there is a watermark or not.

Mean scores for watermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Mean scores for unwatermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Weighted Mean scores for watermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]
Weighted Mean scores for unwatermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]
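For context, my understanding is that the Mean detector simply averages the per-token g-values of each response, so unwatermarked text should land near 0.5 and watermarked text noticeably above it. A minimal sketch of that computation (placeholder names, not the notebook's exact helper):

```python
import numpy as np

def mean_score(g_values: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average the g-values of each response over its non-padding tokens.

    g_values: [batch, seq_len, depth] per-token g-values from the watermark.
    mask:     [batch, seq_len] with 1 for real tokens and 0 for padding.
    """
    per_token = g_values.mean(axis=-1)                    # [batch, seq_len]
    num_tokens = mask.sum(axis=-1)                        # tokens per response
    return (per_token * mask).sum(axis=-1) / num_tokens   # [batch]

# Expectation: roughly 0.5 for unwatermarked text, clearly above 0.5 for
# watermarked text -- which is not what I see in the scores above.
```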

Whether or not I add a watermark, with the same random-seed conditions and two different model classes, the output is the same. From my understanding, since one run uses synthid_mixin.SynthIDGemmaForCausalLM and the other uses transformers.GemmaForCausalLM, the results of the two models should not be identical.

SynthIDGemmaForCausalLM

# Initialize a SynthID Text-enabled model.
model = synthid_mixin.SynthIDGemmaForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)
# Prepare your inputs in the usual way.
inputs = tokenizer(
    INPUTS,
    return_tensors='pt',
    padding=True,
).to(DEVICE)
# Generate watermarked text.
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=1024,
    temperature=TEMPERATURE,
    top_k=TOP_K,
    top_p=TOP_P,
)

GemmaForCausalLM

# Initialize a standard tokenizer from Transformers.
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
# Initialize a GemmaForCausalLM model.
model = transformers.GemmaForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

inputs = tokenizer(
    INPUTS,
    return_tensors='pt',
    padding=True,
).to(DEVICE)
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=1024,
    temperature=TEMPERATURE,
    top_k=TOP_K,
    top_p=TOP_P,
)
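To make the comparison concrete, this is roughly how I compare the two runs under the same seed (a sketch; `wm_model` and `plain_model` stand for the SynthID-enabled and plain models loaded above):

```python
import torch

torch.manual_seed(0)  # same sampling seed for both runs
wm_outputs = wm_model.generate(
    **inputs, do_sample=True, max_length=1024,
    temperature=TEMPERATURE, top_k=TOP_K, top_p=TOP_P,
)

torch.manual_seed(0)
plain_outputs = plain_model.generate(
    **inputs, do_sample=True, max_length=1024,
    temperature=TEMPERATURE, top_k=TOP_K, top_p=TOP_P,
)

# If the watermarking logits processor is active, the sampled token ids should
# diverge after a few tokens; torch.equal returns False for differing values
# (or differing shapes).
print(torch.equal(wm_outputs.cpu(), plain_outputs.cpu()))
```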

What I understand is that after adding the watermark, the output should differ at least slightly, but judging from the results, the two are currently completely identical. Can you provide a sample of the notebook's result set? Thank you.

sumedhghaisas2 commented 3 weeks ago

@wrx1990 A couple of questions. Are you using this model

model = synthid_mixin.SynthIDGemmaForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

as the watermarked model? The watermarked model needs to be set up differently; you can walk through the setup in the Colab implementation.
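As a first sanity check (a sketch with placeholder names, since both snippets above assign to the same `model` variable), print the class of each model object right before generation to confirm the SynthID-enabled class is the one producing the "watermarked" responses:

```python
# Placeholder names: wm_model = the SynthIDGemmaForCausalLM load,
# plain_model = the transformers.GemmaForCausalLM load.
print(type(wm_model))     # expected: synthid_mixin.SynthIDGemmaForCausalLM
print(type(plain_model))  # expected: transformers.GemmaForCausalLM
```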

wrx1990 commented 2 weeks ago

@sumedhghaisas2 First, I ran the Colab code in my local environment, and the results in the Mean detector part are exactly the same in both cases. In theory, the results should differ between watermarked and unwatermarked text.

Mean scores for watermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Mean scores for unwatermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Weighted Mean scores for watermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]
Weighted Mean scores for unwatermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]

Second, in the "Generate watermarked output" section, whether enable_watermarking is true or false, the output is identical. I can't see any difference in the results with the watermark.

Can you tell me how to run the program so that the outputs with and without the watermark actually differ?