togethercomputer / RedPajama-Data

The RedPajama-Data repository contains code for preparing large datasets for training large language models.
Apache License 2.0

Regarding the quality classifier #86

Open kimcando opened 7 months ago

kimcando commented 7 months ago

Hi there,

Regarding the quality signals, only the fastText-based model trained on Wikipedia is provided in the README. Would it be possible to share the PaLM version (books, wiki, OWT) too, or to share the data-proportion recipe used to train the classifier? The classifier behaves very differently depending on the corpus mixture.

Thanks in advance. Cheers,

mauriceweber commented 7 months ago

Hi @kimcando -- that's a good point, we will share those classifiers.

The mixtures we used were always 50% high quality, 50% CC docs. What mixtures did you experiment with?
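
For anyone wanting to reproduce this setup, here is a minimal sketch of training such a 50/50 fastText classifier. The file paths, label names, and hyperparameters are assumptions for illustration, not the exact RedPajama recipe:

```python
import random

import fasttext  # pip install fasttext

# Hypothetical inputs: one document per line, plain text.
HQ_PATH = "wiki_docs.txt"  # high-quality source (e.g. Wikipedia)
CC_PATH = "cc_docs.txt"    # random CommonCrawl documents

def build_training_file(out_path: str, n_per_class: int) -> None:
    """Write a 50/50 high-quality / CC mixture in fastText's __label__ format."""
    def read_docs(path: str) -> list[str]:
        with open(path, encoding="utf-8") as f:
            return [line.strip() for line in f if line.strip()]

    rows = [f"__label__hq {doc}" for doc in random.sample(read_docs(HQ_PATH), n_per_class)]
    rows += [f"__label__cc {doc}" for doc in random.sample(read_docs(CC_PATH), n_per_class)]
    random.shuffle(rows)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(rows) + "\n")

build_training_file("train.txt", n_per_class=50_000)

# Hyperparameters are illustrative; as noted in this thread, choices like
# these shift the resulting score distribution.
model = fasttext.train_supervised(input="train.txt", lr=0.1, epoch=3, wordNgrams=2)
model.save_model("quality_classifier.bin")
```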

kimcando commented 7 months ago

Before getting into the mixture question, let me rephrase mine: were there any metrics or standards used to select the classifier?

Even for the 'wiki version' classifier, the score distribution of the one I trained differs from the provided meta information (my training sets ranged from a few thousand to a few million docs, also at a 50/50 ratio of high-quality to CC).

Since several factors can contribute to the score (e.g., word counts and hence doc counts, learning rate, and so on), the classifier's output scores can differ considerably even with the same Wikipedia source.

Therefore, I'd guess the scores from the data-mixture versions would diverge even more severely. So before getting to the mixture part, I'd like to know how you decided on the classifier!
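
To make the distribution comparison concrete, here is a hedged sketch of how one could compare two classifiers' score distributions on the same CC sample. The model paths, the sample file, and the __label__hq name are assumptions carried over from the sketch above:

```python
import fasttext
import numpy as np

def hq_scores(model_path: str, docs: list[str]) -> np.ndarray:
    """Probability each doc receives for the (assumed) high-quality label."""
    model = fasttext.load_model(model_path)
    scores = []
    for doc in docs:
        labels, probs = model.predict(doc.replace("\n", " "))
        # With two labels, P(hq) is either the top score or its complement.
        scores.append(probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0])
    return np.asarray(scores)

with open("sample_cc_docs.txt", encoding="utf-8") as f:
    docs = [line.strip() for line in f if line.strip()]

# Compare percentiles of the two score distributions side by side.
for path in ("my_classifier.bin", "provided_classifier.bin"):
    p10, p50, p90 = np.percentile(hq_scores(path, docs), [10, 50, 90])
    print(f"{path}: p10={p10:.3f} p50={p50:.3f} p90={p90:.3f}")
```

Large gaps between the percentiles of the two distributions would illustrate the point above: the same 'wiki' recipe can still yield differently calibrated scores depending on corpus size and training hyperparameters.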