A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
Hi! Big thanks for the wonderful work. There is a weird thing I found about the True and False token IDs and would like to make an enquiry.
Description
I saw that the factscorer.py code uses predefined token IDs to access the token logits. When I was double-checking the exact decoded tokens of these two IDs, I found that:
While tokenizer.decode([5852]) returns 'True' correctly, tokenizer.convert_tokens_to_ids('True') returns a different ID (5574) rather than 5852. In other words, it seems that the two IDs 5574 and 5852 represent the same token. The same strange phenomenon happens with 'False' as well.
Reproduce
Please see the code below to reproduce this:
from transformers import AutoTokenizer

# Load the Llama-2 chat tokenizer (the model itself is not needed here)
model_name = 'meta-llama/Llama-2-7b-chat-hf'
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Look up the IDs for the tokens 'True' and 'False'
true_token_index = tokenizer.convert_tokens_to_ids('True')
false_token_index = tokenizer.convert_tokens_to_ids('False')
print(f"Index of 'True': {true_token_index}")
print(f"Index of 'False': {false_token_index}")

# Decode the IDs hardcoded in factscorer.py
true_token = tokenizer.decode([5852])
false_token = tokenizer.decode([7700])
print(f"Token at index 5852: {true_token}")
print(f"Token at index 7700: {false_token}")
Output:
Index of 'True': 5574
Index of 'False': 8824
Token at index 5852: True
Token at index 7700: False
Question:
Is there an explanation for this? And is there any advantage to hardcoding the token IDs as 5852 and 7700 rather than using tokenizer.convert_tokens_to_ids('True')?
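For reference, one plausible explanation — a sketch based on how SentencePiece-style vocabularies generally behave, not verified against the actual Llama-2 vocab file: SentencePiece marks word-initial pieces with a leading '▁', so the vocabulary can contain both a bare piece 'True' (as it would appear mid-word) and a word-initial piece '▁True', each with its own ID. convert_tokens_to_ids('True') looks up the bare piece, while decode() strips the '▁' marker, so both IDs render as the same visible text. A toy sketch of that asymmetry, with piece-to-ID pairs that mirror the numbers printed above:

```python
# Toy model of a SentencePiece-style vocabulary in which the same visible
# text maps to two IDs, depending on the word-boundary marker '▁' (U+2581).
# The IDs mirror the ones printed above; the real mapping should be checked
# with tokenizer.convert_ids_to_tokens on the actual Llama-2 tokenizer.
vocab = {
    "True": 5574,        # bare piece, as it appears mid-word
    "\u2581True": 5852,  # word-initial piece '▁True'
    "False": 8824,
    "\u2581False": 7700,
}
id_to_piece = {v: k for k, v in vocab.items()}

def convert_tokens_to_ids(piece):
    # An exact lookup: 'True' and '▁True' are distinct vocabulary entries
    return vocab[piece]

def decode(ids):
    # decode() replaces '▁' with a space, so both variants render identically
    return "".join(id_to_piece[i] for i in ids).replace("\u2581", " ").strip()

print(convert_tokens_to_ids("True"))  # 5574, not 5852
print(decode([5852]))                 # 'True'
print(decode([7700]))                 # 'False'
```

If that is what is happening, 5852 and 7700 would be the word-initial variants '▁True' and '▁False'; whether those are the right IDs to read logits from then depends on where the token appears in the generated text.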