kira-lin opened this issue 6 months ago
After #209 closes, consider calculating the correct input length for every prompt in MultiplePromptInput, as well as the number of generated tokens.
```python
# count pad tokens per prompt; dim=1 assumes input_ids has shape (batch, seq_len)
torch.sum(input_ids == tokenizer.pad_token_id, dim=1).tolist()
```
Doing so can remove pad tokens when calculating benchmark results.
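A minimal sketch of the per-prompt length calculation, assuming `input_ids` is a 2-D tensor of shape `(batch, seq_len)` and padding uses `pad_token_id` (the function name and example values here are illustrative, not part of the codebase):

```python
import torch

def real_input_lengths(input_ids: torch.Tensor, pad_token_id: int) -> list:
    # Count pad tokens along the sequence dimension (dim=1), then
    # subtract from the padded length to get each prompt's true length.
    pad_counts = torch.sum(input_ids == pad_token_id, dim=1)
    return (input_ids.shape[1] - pad_counts).tolist()

# Example: two prompts padded to length 5 with pad id 0.
batch = torch.tensor([[11, 12, 13, 0, 0],
                      [21, 22, 23, 24, 25]])
print(real_input_lengths(batch, pad_token_id=0))  # [3, 5]
```

Subtracting these pad counts before computing tokens/sec would keep the benchmark numbers from being inflated by padding.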