shirounanashi opened 8 months ago
I also had this error, in my case it was because my GPU ran out of memory. I don't know if this is the case for you too, but I would recommend checking your GPU memory while doing the inference.
@alcoftTAO I don't think that's the problem here since I was using a T4
I think an error is occurring during inference, since it is returning None to scipy.
If you are using the API, make sure to put this:
```shell
curl -X 'POST' \
  'http://127.0.0.1:8000/inference' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'modelpath={model.pth}' \
  -F 'input={input audio path}' \
  -o {output audio path}
```
Where you replace {model.pth} with the .pth file of your model, {input audio path} with the audio file you want to run inference on, and {output audio path} with the file in which you want to save the result.
I don't know if it will work, as I have never used the API. If it doesn't work, try creating the output audio file first and then run it again.
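For reference, the curl command above can also be expressed in Python. This is a hedged sketch that only *builds* the same multipart POST request without sending it; the endpoint and form-field names (`modelpath`, `input`) are taken from the curl example, and the file paths are placeholders you must replace:

```python
import requests

# Sketch: build (but do not send) the multipart POST that the curl
# command above performs. Endpoint and field names come from the
# curl example; paths and file contents are placeholders.
def build_inference_request(model_path, audio_bytes):
    req = requests.Request(
        "POST",
        "http://127.0.0.1:8000/inference",
        headers={"accept": "application/json"},
        data={"modelpath": model_path},          # same as -F 'modelpath=...'
        files={"input": ("input.wav", audio_bytes)},  # same as -F 'input=...'
    )
    return req.prepare()

# requests.Session().send(prepared) would then perform the call.
prepared = build_inference_request("model.pth", b"RIFF")
```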
I don't think this is the problem since I'm using CLI, not API
@shirounanashi could you tell me your environment?
@Tps-F I was trying to use it on Kaggle
When I try to run inference by specifying an output file name, I get this error. And when I pass a folder to "-o", it gives an error saying it is a directory. Sorry for the bad English, I used Google Translate.
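Since the "-o" target must be a file path rather than a directory, one workaround is to resolve the path up front. This is a hedged sketch (the helper name and default file name are made up for illustration) that turns a directory into a file path inside it and pre-creates an empty output file, as suggested above:

```python
import os

# Sketch: ensure the output target is a file, not a directory.
def resolve_output_path(out_path, default_name="output.wav"):
    if os.path.isdir(out_path):
        # A directory was given: place a default file name inside it.
        out_path = os.path.join(out_path, default_name)
    # Pre-create an empty file so the tool has something to write into.
    open(out_path, "ab").close()
    return out_path
```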