
RiffRefine: Leverage your Music!

This project uses metadata and audio features to predict the number of listens a new song will receive. The model is designed to integrate into an app that gives artists feedback on their songs. This should enable up-and-coming artists with a small budget and limited equipment to get initial feedback on their newest creative work and hints at how to reach a larger audience.
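
To make the idea concrete, the sketch below shows one possible shape of such a pipeline: summarize each track with librosa features, join them with metadata, and fit a regressor on listen counts. The file paths, column names, and choice of model here are illustrative, not the project's actual implementation.

    # Sketch of the prediction idea: audio features + metadata -> listen count.
    # Paths, column names, and the model choice are placeholders.
    import librosa
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    def audio_features(path):
        """Summarize one track as a fixed-length feature vector."""
        y, sr = librosa.load(path, duration=30.0)            # 30 s is enough for a sketch
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # timbre
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)       # rhythm
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), np.atleast_1d(tempo)])

    # Hypothetical training table: one row per track with an audio path and listen count.
    tracks = pd.read_csv("data/tracks_sample.csv")
    X = np.vstack([audio_features(p) for p in tracks["audio_path"]])
    listens = tracks["listens"].to_numpy()

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, listens)
    new_song = audio_features("data/new_song.mp3").reshape(1, -1)
    print("Predicted listens:", model.predict(new_song)[0])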

Setup

We started this project from scratch, and it requires several dependencies, which are listed in requirements.txt. To ensure that the project runs correctly, we recommend setting up a virtual environment before installing the dependencies.

To create a virtual environment, follow these steps:

  1. Install pyenv to manage your Python versions; installation instructions for your operating system are on the pyenv GitHub page. In a terminal or command prompt, navigate to the project directory and run pyenv local 3.11.3. This pins the Python version for the current directory to 3.11.3.

  2. Create a new virtual environment by running the command python -m venv .venv. Activate it by running source .venv/bin/activate on Linux/macOS or .venv\Scripts\activate on Windows.

  3. Upgrade pip to the latest version by running the command pip install --upgrade pip.
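
For quick reference, the full sequence from the steps above on Linux/macOS (use the Windows activation command from step 2 instead where applicable):

    pyenv local 3.11.3
    python -m venv .venv
    source .venv/bin/activate
    pip install --upgrade pip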

Installation

To install and run this project, follow these steps:

  1. Clone this repository to your local machine.

  2. Install the required libraries by running pip install -r requirements.txt in your terminal.

  3. Download the datasets fma_metadata.zip (342 MiB) and fma_small.zip (7.2 GiB) from FMA: A Dataset For Music Analysis.

  4. Unzip both datasets into the data folder.
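
Once both archives are extracted into the data folder, the metadata can be loaded along the lines below. This is a sketch that assumes the standard archive layout (data/fma_metadata/tracks.csv); the column names follow the FMA documentation, so double-check them against your download.

    import pandas as pd

    # tracks.csv ships with a two-row column header (category, field), hence header=[0, 1].
    tracks = pd.read_csv("data/fma_metadata/tracks.csv", index_col=0, header=[0, 1])

    # Restrict to the tracks that are part of the small audio subset and
    # pull out the target variable: the number of listens per track.
    small = tracks[tracks[("set", "subset")] == "small"]
    listens = small[("track", "listens")]
    print(listens.describe())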

Notebooks Overview - WIP

Conclusion

The prediction model developed in this project has the potential to open up the largely untouched area of 'Fame Prediction' within the field of Music Information Retrieval (MIR). Integrating the model into an app could give low-budget musicians a tool to use as a feedback loop.

We acknowledge that music, as a form of art and culture, is very complex. A natural next step for the project is therefore to repeat the analysis on recently released tracks, since trends play a major role in the music industry. A deeper dive into the audio features used in the machine learning algorithms would also be interesting. Feeding alternative audio representations of a track, such as spectrograms, into a convolutional neural network might give the audio features more weight in predicting the number of listens for a new song.
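
As a rough illustration of that direction, the snippet below turns a clip into a log-scaled mel spectrogram with librosa, the kind of 2-D input a convolutional network could consume instead of hand-crafted features. The file path is only an example, and no actual network is defined here.

    import librosa
    import numpy as np

    # Convert a 30-second clip into a log-scaled mel spectrogram: a 2-D array
    # (mel bands x time frames) that can be fed to a CNN like an image.
    y, sr = librosa.load("data/fma_small/000/000002.mp3", duration=30.0)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel, ref=np.max)   # shape: (128, n_frames)

    # A CNN would then take batches shaped (batch, 1, 128, n_frames)
    # and regress the listen count from this representation.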

MIR has received growing interest in recent years. With significant technical improvements in audio signal processing (e.g. librosa) and machine learning, the potential is large, making this an appealing and promising project for the advancement of the whole field.

Contributing

We welcome contributions from everyone. Here are some ways you can contribute:

Thank you for your interest in contributing to our project!