Apache License 2.0

Cosmopedia


Image generated by DALL-E, the prompt was generated by Mixtral-8x7B-Instruct-v0.1.

[🤗 Cosmopedia dataset] | [🤖 1B-LLM trained on Cosmopedia] | [📰 Blog post]

Description

Here you can find the code used to create Cosmopedia, a dataset of synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1. It contains over 30 million files and 25 billion tokens, making it the largest open synthetic dataset to date.

Cosmopedia covers a wide range of topics: we tried to map the world knowledge present in web datasets like RefinedWeb and RedPajama and to generate synthetic content that covers it. This is v0.1 of Cosmopedia, with ample room for improvement and for topics to be covered more comprehensively. We hope this dataset will support the community's research efforts in the increasingly intriguing domain of synthetic data.

The clusters of Cosmopedia.

You can also find a file-frequency plot of the single-topic clusters in plots/topic_distpng.png.

Code structure