togethercomputer / RedPajama-Data

The RedPajama-Data repository contains code for preparing large datasets for training large language models.

Inquiry About Character-Level Basis of Duplication Calculation #116


luc1fer3 commented 2 months ago

Hi, thank you for the release. I've been reviewing the method used to calculate the repetition score for identifying duplicate content in documents, specifically the segment where the score is computed from the number of characters within duplicate n-grams:

https://github.com/togethercomputer/RedPajama-Data/blob/bb594b01a92b7e6fcf70cf3b6659851ce17edcce/app/src/core/quality_signals/repetitions.py#L136-L138

I noticed that character counts (word_lengths) are used to determine the extent of duplication, so the metric operates at the granularity of characters rather than whole words. Could you help me understand the rationale for choosing character-level analysis for this metric instead of basing the calculation directly on word counts? Are there specific advantages or scenarios where character-level detail provides better insight into data quality or model-training effectiveness than word-level analysis would?

Looking forward to your insights.

mauriceweber commented 2 months ago

Hi @luc1fer3, and thanks for your question. This repetition score measures the ratio between the number of characters that appear in duplicated n-grams and the total number of characters in the document. As such, the score combines information at the character level and at the (word-)n-gram level. Computing character-based metrics essentially means you normalize at a finer level of granularity, taking into account more information than when using the number of words (e.g., think of long words that are repeated often). That said, a combination with word-level statistics may also give you a good indicator.
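For concreteness, here is a minimal, self-contained sketch of this kind of character-normalized score. The function name and structure are illustrative assumptions for this discussion, not the actual implementation in repetitions.py:

```python
# Hypothetical sketch of a character-level duplicated-n-gram score.
# dup_ngram_char_fraction is an illustrative name, not the repo's API.
from collections import Counter


def dup_ngram_char_fraction(text: str, n: int = 2) -> float:
    """Fraction of characters covered by words inside duplicated n-grams."""
    words = text.split()
    word_lengths = [len(w) for w in words]
    total_chars = sum(word_lengths)
    if total_chars == 0 or len(words) < n:
        return 0.0

    # Count every word-level n-gram in the document.
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)

    # Mark word positions that belong to at least one duplicated n-gram.
    duplicated = [False] * len(words)
    for i, ngram in enumerate(ngrams):
        if counts[ngram] > 1:
            for j in range(i, i + n):
                duplicated[j] = True

    # Character-based normalization: sum the lengths of the marked words
    # and divide by the total character count of the document.
    dup_chars = sum(l for l, d in zip(word_lengths, duplicated) if d)
    return dup_chars / total_chars
```

With this normalization, a repeated 20-character word contributes far more to the score than a repeated 2-character word, whereas a word-count ratio would treat both identically.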

Hope this helps!