Open jasonppy opened 6 days ago
According to the model's generation config (linked below), do_sample defaults to True. This means that if you want consistent results across repeated inferences, you should pass do_sample=False.
https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/generation_config.json
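For reference, a minimal sketch of deterministic generation via greedy decoding (the model ID and prompt here are placeholders, not taken from the original report):

from transformers import pipeline

# With do_sample=False, generation falls back to greedy decoding, which is
# deterministic for a given prompt without any seeding.
pipe = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")
print(pipe("Hello there", do_sample=False, max_new_tokens=20)[0]["generated_text"])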
Yes, but setting do_sample=False will lead to greedy decoding, which I would like to avoid. Theoretically, if all random seeds are fixed, top_p sampling should produce the same samples for the same prompt.
cc @gante @zucchini-nlp
It seems you need to set the seed before every generation call to get identical results. The snippet below works for me:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

model_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello there", return_tensors="pt").to("cuda")

for i in range(3):
    # Re-seed before every call so each generation starts from the same RNG state.
    set_seed(1)
    # top_p must be a probability in (0, 1]; values >= 1 effectively disable nucleus filtering.
    outputs = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=20)
    response = tokenizer.batch_decode(outputs)[0]
    print(response)
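Note that the set_seed call has to be inside the loop: every sampled token draws from the global RNG, so after one generate call the RNG state has advanced, and the next call would sample different tokens unless the seed is reset first. Seeding once before the loop only makes the first generation reproducible.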
System Info
python: 3.11.9
transformers: 4.43.3
torch: 2.4.0+cu121
Who can help?
@ArthurZucker @gante
Reproduction
When using pipeline to generate text with Llama 3.1 8B, the output is different every time for the same prompt, even though I have fixed all the random seeds. If I set do_sample=False, the output is the same each time.
I understand that do_sample performs top_p (or top_k) sampling and therefore introduces randomness, but since I have fixed the seed, shouldn't the outputs be identical?
Below is the script to reproduce:
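The original script was not included in this copy of the report; a minimal sketch of the described setup (text-generation pipeline, Llama 3.1 8B, seed fixed once up front, repeated sampling of the same prompt) might look like the following, where the prompt and generation parameters are assumptions:

import torch
from transformers import pipeline, set_seed

# Sketch of the reported setup: seed fixed once, then repeated sampling.
# The three outputs typically differ because each call advances the RNG state.
set_seed(42)
pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
for _ in range(3):
    out = pipe("Hello there", do_sample=True, top_p=0.9, max_new_tokens=20)
    print(out[0]["generated_text"])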
Expected behavior
When the random seed is fixed and the prompt is the same, each generation should give the same result.