An implementation of the music separation model by Luo et al.
Prepare .wav files to separate.
Install library:

```sh
pip install git+https://github.com/leichtrhino/ChimeraNet
```
Download pretrained model.
Download sample script.
Run script:

```sh
python chimeranet-separate.py -i ${input_dir}/*.wav \
    -m model.hdf5 \
    --replace-top-directory ${output_dir}
```
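If several files or directories need processing, the same call can be driven from Python instead of typing the glob by hand. This is only a sketch, not part of ChimeraNet; the `input` and `separated` directory names are placeholders, and the flags mirror the command above.

```python
import subprocess
from pathlib import Path

input_dir = Path("input")        # placeholder: directory holding the .wav files
output_dir = Path("separated")   # placeholder: where separated files should go

# Expand the equivalent of ${input_dir}/*.wav ourselves.
wav_files = sorted(str(p) for p in input_dir.glob("*.wav"))

# Call the sample script with the same flags as the shell command above.
subprocess.run(
    ["python", "chimeranet-separate.py",
     "-i", *wav_files,
     "-m", "model.hdf5",
     "--replace-top-directory", str(output_dir)],
    check=True,
)
```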
Output in a nutshell: `${input_file}_{embd,mask}_ch[12].wav`
`embd` and `mask` indicate that the file was inferred from deep clustering and the mask respectively. `ch1` and `ch2` are the voice and music channels respectively. See the Example section of the ChimeraNet documentation.
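As a quick check of the results, the separated channels can be listed and loaded from Python. A minimal sketch, assuming the outputs live under a `separated` directory (as in the batch sketch above) and that the third-party `soundfile` package is installed for reading .wav files:

```python
import glob
import soundfile as sf  # assumed .wav reader; any audio loader works

# *_embd_* files come from the deep clustering output, *_mask_* from the mask output;
# ch1 is the voice channel and ch2 the music channel.
for path in sorted(glob.glob("separated/**/*_ch?.wav", recursive=True)):
    audio, rate = sf.read(path)
    print(f"{path}: {audio.shape[0] / rate:.1f} s at {rate} Hz")
```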
Install with `pip install git+https://github.com/leichtrhino/ChimeraNet` or any Python package installer. (Currently, ChimeraNet is not on PyPI.) Install `tensorflow` if unsure.
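A quick way to verify the install is to import the package from Python; the top-level module name `chimeranet` is assumed here from the repository name and may differ.

```python
# Both imports should succeed after installation; details are printed for reference.
import tensorflow   # Keras backend
import chimeranet   # module name assumed from the repository name

print("tensorflow", tensorflow.__version__)
print("chimeranet imported from", chimeranet.__file__)
```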