asusdisciple closed this issue 1 day ago
@BBC-Esq and I are currently working on this, check #974
I'll send an invite to the repo if he wants to help out or just kibitz. Like @MahmoudAshraf97 I've been inundated with other stuff but do plan to get back to the benchmarking in the very near future.
I want to benchmark faster-whisper against some pipeline-based Whisper implementations in Hugging Face. For the sake of fairness I would like to parametrize the models as equally as possible.
In HF you have different generation strategies, e.g. greedy decoding, sampling with `do_sample`, and beam search.
How would I, for example, reproduce greedy decoding in faster-whisper? Is there a `do_sample` parameter? Should I set `best_of = 1` and `beam_size = 1`? Also, if I set `do_sample = True` in HF, would that be equal to setting `best_of = 5`? Maybe you can share some insights with me; ideally I want to reproduce all of the above strategies.

Best regards
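For what it's worth, here is a rough sketch of how I currently think the HF `generate()` kwargs map onto faster-whisper's `transcribe()` kwargs (`beam_size`, `best_of`, `temperature`). The `hf_to_faster_whisper` helper is hypothetical, and the mapping itself is my reading of both APIs rather than a verified equivalence, so corrections are welcome:

```python
# Hypothetical mapping between HF-style generation kwargs and
# faster-whisper transcribe() kwargs, for the strategies discussed above.
# This is a sketch of my understanding, not a verified equivalence.

def hf_to_faster_whisper(do_sample: bool = False,
                         num_beams: int = 1,
                         temperature: float = 1.0) -> dict:
    """Translate HF generate() kwargs into faster-whisper transcribe() kwargs."""
    if not do_sample and num_beams == 1:
        # Greedy decoding: a single beam, temperature 0 disables sampling.
        return {"beam_size": 1, "best_of": 1, "temperature": 0.0}
    if not do_sample and num_beams > 1:
        # Beam search: beam_size plays the role of num_beams.
        return {"beam_size": num_beams, "temperature": 0.0}
    # Sampling: faster-whisper samples when temperature > 0;
    # best_of then picks the best of several sampled candidates
    # (so do_sample=True would roughly correspond to best_of=5).
    return {"beam_size": 1, "best_of": 5, "temperature": temperature or 1.0}

# Example: kwargs I would pass for greedy decoding, e.g.
#   model.transcribe("audio.wav", **hf_to_faster_whisper())
greedy_kwargs = hf_to_faster_whisper()
```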