Open · timsueberkrueb opened this issue 10 months ago

timsueberkrueb:

Hey, thank you for making this dataset available to the community. I'm wondering how you estimated the token counts in the table in the README and the blog post. In particular, do you have the corresponding numbers in bytes or Unicode codepoints? Thanks a lot in advance.

mauriceweber:

Hi @timsueberkrueb -- we used the mistral-7B tokenizer and tokenized a subset of 100M documents. We then used these token counts to extrapolate to the full dataset. You can check out the code used to count tokens here: https://github.com/togethercomputer/RedPajama-Data/blob/main/app/src/token_count.py

> In particular, do you have the corresponding numbers in bytes or Unicode codepoints?

What do you mean by this? Are you referring to a specific tokenizer?

timsueberkrueb:

Thank you @mauriceweber!

> What do you mean by this? Are you referring to a specific tokenizer?

I was wondering about the total amount of text data per language (excluding metadata etc.), prior to tokenization.
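For reference, the subsample-and-extrapolate estimate described in the thread can be sketched in a few lines of Python. This is a hedged illustration, not the repo's actual script: the whitespace tokenizer below is a toy stand-in for the mistral-7B tokenizer used in the linked `token_count.py`, and the document counts are invented.

```python
def estimate_total_tokens(sample_docs, total_docs, tokenize):
    """Estimate the dataset-wide token count by tokenizing a sample
    of documents and extrapolating the per-document average."""
    sample_tokens = sum(len(tokenize(doc)) for doc in sample_docs)
    tokens_per_doc = sample_tokens / len(sample_docs)
    return round(tokens_per_doc * total_docs)


# Toy stand-in for a real tokenizer (the actual counts used mistral-7B).
tokenize = str.split

sample = ["one two three", "four five", "six"]
# 6 tokens over 3 sampled docs -> 2 tokens/doc; extrapolated to 100 docs -> 200
print(estimate_total_tokens(sample, total_docs=100, tokenize=tokenize))
```

The accuracy of the extrapolation depends on the sample being representative of the full dataset; the thread notes a 100M-document subset was used for this.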
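The follow-up question — total raw text per language, in bytes or Unicode codepoints, prior to tokenization — could be answered along these lines. The `(language, text)` pair format is an assumption for illustration, not the dataset's actual schema; note that UTF-8 byte counts and codepoint counts differ for non-ASCII text.

```python
from collections import defaultdict


def text_sizes_by_language(docs):
    """Sum UTF-8 bytes and Unicode codepoints of raw document text,
    grouped by language. `docs` is an iterable of (language, text) pairs."""
    sizes = defaultdict(lambda: {"bytes": 0, "codepoints": 0})
    for lang, text in docs:
        sizes[lang]["bytes"] += len(text.encode("utf-8"))  # encoded size
        sizes[lang]["codepoints"] += len(text)             # Unicode codepoints
    return dict(sizes)


docs = [("de", "Grüße"), ("en", "hello")]
# "Grüße" is 5 codepoints but 7 UTF-8 bytes (ü and ß take 2 bytes each)
print(text_sizes_by_language(docs))
```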