This repository was created for the Audio Dataset Project, an audio dataset collection initiative announced by LAION. These datasets, each containing an enormous number of audio-text pairs, will eventually be processed and used to train CLAP (Contrastive Language-Audio Pretraining) and other models.
Here is an explanatory video introducing the project.
Since the Audio Dataset Project is an open-source project belonging to LAION, it is driven by a team of open-source contributors. Alongside LAION members, the team consists of a three-person research group, Yusong Wu, Ke Chen and Tianyu Zhang from Mila and UCSD, intern Marianna Nezhurina, former intern Yuchen Hui, as well as many enthusiastic contributors from all over the world, such as @PiEquals4#1909 on the Discord server.
Dependencies are listed in `environment.txt`. Please note that `environment.txt` may be a non-exhaustive list. There is also `environment.yml`, a list with redundant packages (i.e. a superset of the exhaustive list); you can use the command `conda env create --name envname --file=environment.yml` to create the environment and `conda activate envname` to activate it.

We have created a GitHub project page to keep track of the progress of data collection and data processing. Here are short descriptions of each board of the project:
There are mainly two ways to contribute to our audio dataset project.
1. Collection of scattered audio sources by means of web scraping (and then converting them to webdataset format, i.e. the second point below).
Example: crawling word-pronunciation pairs from the Cambridge Dictionary, or scraping videos from YouTube, extracting the audio and associating it with the title (see the sketch below).
Please join us on Discord if you want to know which scattered audio sources we are currently focusing on, or if you have suggestions about what we should scrape next.
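To make the hand-off concrete, here is a minimal, hedged sketch of what the end of such a scrape might look like: it collects audio-URL/title pairs from a listing page and writes them to the `url, text` CSV described further below. The page URL, CSS selector and output file name are purely hypothetical placeholders, not part of the project's tooling.

```python
import csv

import requests
from bs4 import BeautifulSoup  # assumed scraping stack; any HTML parser would do

# Hypothetical listing page whose entries link to audio files and carry a title.
LISTING_URL = "https://example.com/audio-clips"

rows = []
soup = BeautifulSoup(requests.get(LISTING_URL, timeout=30).text, "html.parser")
for link in soup.select("a.audio-clip"):        # hypothetical CSS selector
    audio_url = link["href"]                    # direct link to the audio file
    title = link.get_text(strip=True)           # text to pair with the audio
    rows.append((audio_url, title))

# Each row of the CSV is an audio_url,text pair (see the CSV hand-off below).
with open("scraped_pairs.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```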
2. Processing of curated datasets, i.e. converting them to webdataset format according to the pipeline.
Example: Clotho is a curated audio dataset with its own format, so we convert it to webdataset format with the aid of `data_preprocess/preprocess_clotho.py` and `utils/make_tars.py`. For more processing details, please read the pipeline part; a minimal sketch of the target tar layout is given below.
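For orientation only, here is a minimal sketch of the webdataset layout the pipeline ultimately produces: a tar shard in which files sharing the same basename (the sample key) form one sample, e.g. one `.flac` plus one `.json` per pair. The actual packing is done by `utils/make_tars.py`; the function name, file names and the exact JSON fields below are illustrative assumptions, not the repository's canonical schema.

```python
import io
import json
import tarfile

def write_webdataset_shard(samples, shard_path):
    """Pack (key, audio_bytes, text) triples into one webdataset-style tar shard.

    Webdataset convention: files that share a basename ("key") belong to the
    same sample; the extension (.flac, .json) identifies each field.
    Illustrative sketch only; the repo's utils/make_tars.py is authoritative.
    """
    with tarfile.open(shard_path, "w") as tar:
        for key, audio_bytes, text in samples:
            # The audio file, e.g. "000000.flac".
            audio_info = tarfile.TarInfo(name=f"{key}.flac")
            audio_info.size = len(audio_bytes)
            tar.addfile(audio_info, io.BytesIO(audio_bytes))

            # The caption/metadata as JSON, e.g. "000000.json" (field name assumed).
            meta = json.dumps({"text": text}).encode("utf-8")
            meta_info = tarfile.TarInfo(name=f"{key}.json")
            meta_info.size = len(meta)
            tar.addfile(meta_info, io.BytesIO(meta))

# Usage sketch (hypothetical paths):
# samples = [("000000", open("clotho/clip_0.flac", "rb").read(), "a dog barks twice")]
# write_webdataset_shard(samples, "clotho_shard_000000.tar")
```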
For this type of contribution, it is suggested to look at the datasets on the Todo board of the GitHub project page and to join our Discord server. Please contact Marianna Nezhurina (marianna13#7139) in the CLAP channel after you have chosen a dataset from the Todo board, so that we can keep track of progress and avoid having many people work on the same dataset simultaneously.
Ideally, in both cases mentioned above, we hope to receive the dataset from you in webdataset format. When you have packed your dataset into webdataset format, upload it to our AWS S3 bucket: `aws s3 cp your/webdataset/ s3://s-laion-audio/webdataset_tar/your_webdataset/ --recursive`
and contact Marianna Nezhurina (marianna13#7139) so that she can move the dataset to the Review board. (If possible, please also add the processed, not-yet-packed dataset to `s3://s-laion-audio/processed_dataset/`.)
If you have problems accessing AWS S3, please see the LAION cluster part of the contact entry above: AWS S3 is accessible when visited from the new LAION cluster.
Nevertheless, for scraped datasets, we also accept a CSV file whose structure is:

url_link_to_the_audio_allowing_us_to_download, text

i.e. each line is an audio_url-text pair, from which we can easily write a batch script to download and process the data.
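As an illustration of how such a CSV might be consumed, here is a hedged sketch that downloads each audio URL and stores its paired text next to it. The file naming and the assumption that each URL ends in a usable audio file extension are placeholders, not the project's actual batch script.

```python
import csv
import os

import requests  # assumed HTTP client; any downloader works

def download_pairs(csv_path, out_dir):
    """Download each audio_url,text pair from a CSV with the structure above."""
    os.makedirs(out_dir, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for i, (url, text) in enumerate(csv.reader(f)):
            # Assumes the URL ends with a usable audio extension such as ".mp3".
            ext = os.path.splitext(url)[1] or ".audio"
            with open(os.path.join(out_dir, f"{i:06d}{ext}"), "wb") as audio_file:
                audio_file.write(requests.get(url, timeout=30).content)
            # Keep the paired text alongside the audio for later packing.
            with open(os.path.join(out_dir, f"{i:06d}.txt"), "w", encoding="utf-8") as txt:
                txt.write(text)

# Usage sketch: download_pairs("scraped_pairs.csv", "downloaded_pairs/")
```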
Last updated on July 14, 0:57 EST, 2022
Last updated on September 5th, 11:00 EST, 2022 (Marianna Nezhurina takes over the internship work of Yuchen Hui)
Last updated on November 8th, 18:55 EST, 2022 (release of the LAION-Audio-630K dataset)