huggingface / hub-docs

Docs of the Hugging Face Hub
http://hf.co/docs/hub
Apache License 2.0

Hosted Inference API and config.json using custom inference script #107

Closed — kunnark closed this 11 months ago

kunnark commented 2 years ago

I've added an audio classification wav2vec model (model.ckpt) to the Hub, together with a custom inference script and a hyperparams.yaml, and I would like the Model Card's Hosted Inference API widget to run inference through that custom script on top of my model.

The Hosted Inference API reads config.json. When the classifier is a native SpeechBrain class, other models work like a charm, but when a custom inference.py and hyperparams.yaml are used it raises an error.

My model: https://huggingface.co/TalTechNLP/voxlingua107-xls-r-300m-wav2vec
A working example using a native SpeechBrain encoder: https://huggingface.co/speechbrain/urbansound8k_ecapa

My two questions: Is it possible to run custom inference? And what should be added to config.json to enable it?
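For the widget side, the Hub generally determines the library and task from the model card (README.md) metadata rather than from config.json alone. A minimal sketch of that metadata, assuming the standard SpeechBrain setup (field values shown are illustrative):

```yaml
# README.md front matter (illustrative sketch)
library_name: speechbrain
tags:
  - audio-classification
```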

osanseviero commented 2 years ago

The SpeechBrain Inference API is open source and implemented in https://github.com/huggingface/api-inference-community/blob/main/docker_images/speechbrain/app/pipelines/audio_classification.py. We use the default hyperparams.yaml interface specified in https://github.com/speechbrain/speechbrain/blob/main/speechbrain/pretrained/interfaces.py#L276.

Regarding your error: you are trying to use EncoderWav2vecClassifier, which, as far as I know, does not exist in SpeechBrain, and that is what leads to the error.
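The failure mode described above can be illustrated with a toy dispatch. This is a simplified sketch, not the actual api-inference-community code; the registry, helper, and error text are hypothetical stand-ins for looking up an interface class in speechbrain.pretrained:

```python
# Hypothetical registry standing in for the interface classes that ship
# with SpeechBrain (speechbrain.pretrained.interfaces).
NATIVE_INTERFACES = {
    "EncoderClassifier": object,
    "EncoderDecoderASR": object,
}

def load_interface(classname: str):
    """Resolve a SpeechBrain interface class by name, roughly as the
    hosted pipeline does; a custom class name cannot be resolved."""
    try:
        return NATIVE_INTERFACES[classname]
    except KeyError:
        raise AttributeError(
            f"speechbrain.pretrained has no interface class {classname!r}"
        ) from None

# A native class resolves:
load_interface("EncoderClassifier")

# A custom class from hyperparams.yaml does not:
try:
    load_interface("EncoderWav2vecClassifier")
except AttributeError as err:
    print(err)
```

Custom interface classes can still be used locally (e.g. via SpeechBrain's foreign_class loader pointing at the Python file in the repo), but the hosted widget only instantiates the native interfaces.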