togethercomputer / RedPajama-Data

The RedPajama-Data repository contains code for preparing large datasets for training large language models.
Apache License 2.0

Token counts #88

Open timsueberkrueb opened 10 months ago

timsueberkrueb commented 10 months ago

Hey, thank you for making this dataset available to the community. I'm wondering how you estimated the token counts in the table in the README and the blog post. In particular, do you have the corresponding numbers in bytes or Unicode codepoints? Thanks a lot in advance.

mauriceweber commented 10 months ago

Hi @timsueberkrueb -- we used the Mistral-7B tokenizer and tokenized a subset of 100M documents. We then used these token counts to extrapolate to the full dataset. You can check out the code used to count tokens here: https://github.com/togethercomputer/RedPajama-Data/blob/main/app/src/token_count.py.
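
For reference, a minimal sketch of this sample-and-extrapolate approach could look like the following. It assumes the Hugging Face `transformers` library; the function name and the `mistralai/Mistral-7B-v0.1` tokenizer identifier are illustrative assumptions, not the repo's actual `token_count.py` script:

```python
from transformers import AutoTokenizer

def estimate_total_tokens(sample_docs, total_num_docs,
                          tokenizer_name="mistralai/Mistral-7B-v0.1"):
    """Tokenize a sample of documents and extrapolate to the full corpus."""
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)

    # Count tokens over the sample; encode() returns the token ids for one document.
    sample_tokens = sum(len(tokenizer.encode(doc)) for doc in sample_docs)

    # Scale the per-document average up to the full dataset size.
    tokens_per_doc = sample_tokens / len(sample_docs)
    return int(tokens_per_doc * total_num_docs)
```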

> In particular, do you have the corresponding numbers in bytes or Unicode codepoints?

What do you mean by this? Are you referring to a specific tokenizer?

timsueberkrueb commented 10 months ago

Thank you @mauriceweber!

> What do you mean by this? Are you referring to a specific tokenizer?

I was wondering about the total amount of text data per language (excluding metadata etc.), prior to tokenization -- i.e., the raw size in bytes or Unicode codepoints.
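
To make the distinction concrete, both measures are easy to compute per document in Python; this helper is hypothetical, just to illustrate bytes vs. codepoints:

```python
def text_size(doc: str) -> tuple[int, int]:
    # UTF-8 bytes vs. Unicode codepoints; these differ for non-ASCII text.
    return len(doc.encode("utf-8")), len(doc)

# "héllo" is 6 bytes in UTF-8 but 5 codepoints.
print(text_size("héllo"))  # (6, 5)
```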