huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

top-p sampling gives different results even after fixing all random seeds #34693

jasonppy commented 6 days ago

System Info

python: 3.11.9
transformers: 4.43.3
torch: 2.4.0+cu121

Who can help?

@ArthurZucker @gante

Reproduction

When using pipeline to generate text with Llama 3.1 8B, the output is different every time, even though I have fixed all random seeds and use the same prompt. If I set do_sample=False, the output is the same on each run.

I understand that do_sample enables top-p (or top-k) sampling and therefore introduces randomness, but since I have fixed the seed, shouldn't the outputs be the same?

Below is a script to reproduce the issue:

import os, random
import numpy as np
import torch
import transformers

cache_dir = "some_dir"
model_size = "8B"
model_id = f"meta-llama/Meta-Llama-3.1-{model_size}-Instruct"

def seed_everything(seed=1):
    # Fix every RNG in play: Python hashing, random, NumPy, and PyTorch (CPU + CUDA).
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # torch.use_deterministic_algorithms(True)

seed_everything(1)

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "cache_dir": cache_dir},
    device_map="auto",
    # do_sample=False
)

message = [
    {"role": "system", "content": "You are a helpful assistant that generate random sentences."},
    {"role": "user", "content": "please generate a random sentence."}
]

for _ in range(5):
    outputs = pipeline(
        message,
        max_new_tokens=2048,
    )
    print(outputs[0]["generated_text"][-1]["content"])

Expected behavior

When the random seed is fixed and the prompt is the same, each generation should give the same result.

rebel-junamsong commented 6 days ago

According to the model's generation config (linked below), do_sample defaults to True. This means that if you want consistent results across repeated inferences, you should pass do_sample=False.

https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/generation_config.json
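
For example, a minimal sketch reusing the pipeline and message objects from the script above; with do_sample=False the model decodes greedily, no random numbers are drawn, and every run matches:

outputs = pipeline(message, max_new_tokens=2048, do_sample=False)
print(outputs[0]["generated_text"][-1]["content"])  # identical across runs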

jasonppy commented 6 days ago

Yes, but setting do_sample=False leads to greedy decoding, which I would like to avoid. Theoretically, if all random seeds are fixed, top-p sampling should produce the same samples for the same prompt.
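
For what it's worth, that premise does hold for a single draw when the RNG is re-seeded right before sampling; a minimal PyTorch illustration (not from the original thread):

import torch

probs = torch.tensor([0.1, 0.2, 0.3, 0.4])

torch.manual_seed(1)
first = torch.multinomial(probs, num_samples=5, replacement=True)

torch.manual_seed(1)  # re-seed: the next draw repeats exactly
second = torch.multinomial(probs, num_samples=5, replacement=True)

print(torch.equal(first, second))  # True
# Without the second manual_seed call, the global RNG state would have
# advanced after the first draw, so the two draws would generally differ.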

LysandreJik commented 3 days ago

cc @gante @zucchini-nlp

zucchini-nlp commented 3 days ago

It seems you need to set the seed before every generation call to get identical results. The snippet below works for me:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

model_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello there", return_tensors="pt").to("cuda")

for i in range(3):
    set_seed(1)  # re-seed before every call so each generation starts from the same RNG state
    outputs = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=20)  # top_p must be in (0, 1]
    response = tokenizer.batch_decode(outputs)[0]
    print(response)
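
Applied to the original pipeline script, the same fix would look roughly like this (a sketch reusing the pipeline and message objects defined above):

from transformers import set_seed

for _ in range(5):
    set_seed(1)  # reset all RNGs before each call so sampling is reproducible
    outputs = pipeline(message, max_new_tokens=2048)
    print(outputs[0]["generated_text"][-1]["content"])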