bigscience-workshop / data_tooling

Tools for managing datasets for governance and training.

Create dataset african_minds_publisher #241

Open albertvillanova opened 2 years ago

albertvillanova commented 2 years ago

Commented by @StellaAthena: https://github.com/bigscience-workshop/data_tooling/issues/57#issuecomment-971148752

There appear to be only about 120 books on this site. While it’s a potentially excellent resource for many applications, it would be infeasible to build a dataset of hundreds of gigabytes out of components this small.
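For a sense of scale, a back-of-envelope sketch (the per-book figure below is an assumption for illustration, not a measurement from the site):

```python
# Back-of-envelope estimate of the text available from a ~120-book publisher site.
# AVG_BOOK_TEXT_MB is an assumed figure, not measured from the site.
NUM_BOOKS = 120
AVG_BOOK_TEXT_MB = 0.5  # a typical book yields well under a megabyte of plain text

total_gb = NUM_BOOKS * AVG_BOOK_TEXT_MB / 1024
print(f"Estimated corpus size: ~{total_gb:.2f} GB")  # ~0.06 GB, i.e. tens of megabytes
```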

Based on my experience working on the Pile, I would strongly recommend putting on hold any data source that does not have over 5 GB of text, and only accepting ones with less than 10 GB if they’re special. If the goal is for this data to comprise one quarter of the training data for the multilingual model (a number I have in my head but don’t know where it came from), we need at least 250 GB of text. It’s going to be much less work to find 25 sources of 10 GB each than to find 250 sources of 1 GB each.
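A minimal sketch of that triage rule, assuming each candidate source comes with an estimated plain-text size in GB (the source names and sizes below are hypothetical):

```python
# Hypothetical candidate sources with estimated plain-text sizes in GB.
candidates = {
    "source_a": 12.0,
    "source_b": 0.06,  # e.g., a ~120-book publisher site like this one
    "source_c": 7.5,
}

MIN_SIZE_GB = 5.0  # put sources below this on hold
TARGET_GB = 250.0  # one quarter of the multilingual training data

accepted = {name: gb for name, gb in candidates.items() if gb >= MIN_SIZE_GB}
total = sum(accepted.values())
print(f"Accepted {len(accepted)} sources totalling {total:.1f} GB "
      f"({total / TARGET_GB:.0%} of the {TARGET_GB:.0f} GB target)")
```

Under this rule, the ~0.06 GB estimated above falls far below the 5 GB bar, which is the point of the comment.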