**The PyTorch-based audio source separation toolkit for researchers.** [![PyPI Status](https://badge.fury.io/py/asteroid.svg)](https://badge.fury.io/py/asteroid) [![Build Status](https://github.com/asteroid-team/asteroid/workflows/CI/badge.svg)](https://github.com/asteroid-team/asteroid/actions?query=workflow%3ACI+branch%3Amaster+event%3Apush) [![codecov][codecov-badge]][codecov] [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![Documentation Status](https://img.shields.io/badge/docs-0.7.0-blue)](https://asteroid.readthedocs.io/en/v0.7.0/) [![Latest Docs Status](https://github.com/asteroid-team/asteroid/workflows/Latest%20docs/badge.svg)](https://asteroid-team.github.io/asteroid/) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/asteroid-team/asteroid/pulls) [![Python Versions](https://img.shields.io/pypi/pyversions/asteroid.svg)](https://pypi.org/project/asteroid/) [![PyPI Status](https://pepy.tech/badge/asteroid)](https://pepy.tech/project/asteroid) [![Slack][slack-badge]][slack-invite]

Asteroid is a PyTorch-based audio source separation toolkit that enables fast experimentation on common datasets. Its source code supports a large range of datasets and architectures, and it ships with a set of recipes to reproduce some important papers.

Do you use Asteroid, or do you want to?

If you have found a bug, please open an issue; if you have solved it, open a pull request! The same goes for new features: tell us what you want, or help us build it! Don't hesitate to join the Slack to ask questions or suggest new features there as well. Asteroid is intended to be a community-based project, so hop on and help us!

Contents

Installation

(↑up to contents) To install Asteroid, clone the repo and install it using conda, pip, or python:

# First clone and enter the repo
git clone https://github.com/asteroid-team/asteroid
cd asteroid
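
Then install it. Assuming a standard pip-based setup, an editable install keeps your clone importable while you work on it, and the stable release is also available from PyPI:

# Install from the cloned repo in editable mode
pip install -e .

# Or install the latest release directly from PyPI
pip install asteroid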

Tutorials

(↑up to contents) Here is a list of notebooks showing example usage of Asteroid's features.
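
If you just want a quick taste of the Python API before opening the notebooks, here is a minimal sketch (the model size, signal lengths, and loss choice are arbitrary illustrations, not taken from a specific tutorial):

import torch
from asteroid.models import ConvTasNet
from asteroid.losses import PITLossWrapper, pairwise_neg_sisdr

# Build a small 2-speaker ConvTasNet (untrained, random weights)
model = ConvTasNet(n_src=2)

# Fake batch: 4 mixtures of 3 seconds at 8 kHz, with matching targets
mixtures = torch.randn(4, 24000)
targets = torch.randn(4, 2, 24000)

# Separation output has shape (batch, n_src, time)
estimates = model(mixtures)

# Permutation-invariant SI-SDR loss, as used in many of the recipes
loss_func = PITLossWrapper(pairwise_neg_sisdr, pit_from="pw_mtx")
loss = loss_func(estimates, targets)
print(loss.item())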

Running a recipe

(↑up to contents) Running the recipes requires additional packages in most cases, so we recommend installing them first:

# from asteroid/
pip install -r requirements.txt

Then choose the recipe you want to run and run it!

cd egs/wham/ConvTasNet
. ./run.sh

More information in egs/README.md.
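
Most run.sh scripts also accept command-line overrides for things like the stage to resume from, an experiment tag, or the GPUs to use. The exact flags vary across recipes, so check the script itself; a hypothetical invocation might look like:

# Resume from a later stage, tag the experiment and pick GPUs
# (flag names vary across recipes; see the recipe's run.sh)
. ./run.sh --stage 3 --tag my_exp --id 0,1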

Available recipes

(↑up to contents)

Supported datasets

(↑up to contents)

Pretrained models

(↑up to contents) See here
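
Loading a published model from its identifier and separating a file typically takes a few lines. This is a sketch; the model identifier below is one example and the available names may change:

from asteroid.models import BaseModel

# Download a pretrained model by its identifier (example name, may differ)
model = BaseModel.from_pretrained("mpariente/ConvTasNet_WHAM_sepclean")

# Separate a wav file; the estimates are saved next to the input file
model.separate("mixture.wav")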

Contributing

(↑up to contents) We are always looking to expand our coverage of source separation and speech enhancement research; the following is a list of things we're missing. Want to contribute? This is a great place to start!

Don't forget to read our contributing guidelines.

You can also open an issue or make a PR to add something we missed in this list.

TensorBoard visualization

The default logger is TensorBoard in all the recipes. From the recipe folder, you can run the following to visualize the logs of all your runs. You can also compare different systems on the same dataset by running a similar command from the dataset directories.

# Launch tensorboard (default port is 6006)
tensorboard --logdir exp/ --port tf_port

If you're launching TensorBoard remotely, you should open an SSH tunnel:

# Open a port-forwarding connection. Add the -Nf option to avoid opening a remote shell.
ssh -L local_port:localhost:tf_port user@ip

Then open http://localhost:local_port/. If both ports are the same, you can simply click the TensorBoard URL printed on the remote machine, which is more practical.
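
For example, using the same (arbitrary) port on both ends:

# On the remote machine
tensorboard --logdir exp/ --port 16006

# On your local machine (-Nf keeps the tunnel in the background)
ssh -Nf -L 16006:localhost:16006 user@ip

# Then browse to http://localhost:16006/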

Guiding principles

(↑up to contents)

Citing Asteroid

(↑up to contents) If you loved using Asteroid and want to cite us, use this:

@inproceedings{Pariente2020Asteroid,
    title={Asteroid: the {PyTorch}-based audio source separation toolkit for researchers},
    author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and
            Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and
            Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge
            and Emmanuel Vincent},
    year={2020},
    booktitle={Proc. Interspeech},
}