Closed: zerlinwang closed this issue 1 year ago
You do not need to modify torchcrepe. ljspeech is not a dataset that I implemented. You should check how you set up that dataset and compare it to the examples provided in cargan/data/download.py.
I have the same issue. It looks like librosa.sequence.viterbi returns np.uint16 values, which are unsupported by torch.tensor.
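For reference, a minimal sketch of the failure (my own illustration, not torchcrepe code): torch builds without uint16 support reject such NumPy arrays, and casting to int64 first is enough to sidestep it.

```python
import numpy as np
import torch

# Hypothetical stand-in for the Viterbi output; the dtype is what matters.
bins = np.array([42, 128, 300], dtype=np.uint16)

try:
    torch.tensor(bins)  # fails on torch versions without uint16 support
except TypeError as error:
    print(error)

# Casting to a supported integer dtype avoids the error.
bins_tensor = torch.tensor(bins.astype(np.int64))
```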
Yep, you were both correct. librosa updated the return type of librosa.sequence.viterbi from np.int64 to np.uint16. Fixed in torchcrepe version 0.0.16.
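For anyone pinned to an older torchcrepe, a rough sketch of the kind of cast the fix implies (assumed, not the actual torchcrepe source; the function name is hypothetical):

```python
import librosa
import numpy as np
import torch

def viterbi_bins(probabilities, transition):
    """Decode frame-wise bins and return them as a torch tensor.

    probabilities: (n_bins, n_frames) observation probabilities.
    transition: (n_bins, n_bins) transition matrix.
    """
    # Hypothetical helper, not torchcrepe's decode function.
    bins = librosa.sequence.viterbi(probabilities, transition)
    # Newer librosa returns np.uint16 here; cast before building the tensor.
    return torch.tensor(bins.astype(np.int64))
```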
When I ran the code with my own dataset
python -m cargan.preprocess --dataset ljspeech
an error occurred. I guess it is caused by the code in torchcrepe\decode.py: the datatype of bins is numpy.uint16. Do I need to modify the code in torchcrepe?