huggingface / distil-whisper

Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate of the original model.
MIT License

[eval] benchmark generation speed #123

Closed · eustlb closed this 2 days ago

eustlb commented 2 months ago

Add benchmarking of token generation speed. To do so, we generate a fixed number of tokens (using min_new_tokens=max_new_tokens=20) on a dummy batch, compute the number of generated tokens per second for the given batch size, then repeat 100 times and average.
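
A minimal sketch of what that measurement could look like. The checkpoint name, batch size, feature shape, and warmup count here are illustrative assumptions, not the repo's actual benchmark script; the only detail taken from the issue is pinning min_new_tokens == max_new_tokens so every run decodes the same number of tokens:

```python
import time

import torch
from transformers import WhisperForConditionalGeneration

# Assumption: checkpoint and device are placeholders for illustration.
model = WhisperForConditionalGeneration.from_pretrained(
    "distil-whisper/distil-large-v2", torch_dtype=torch.float16
).to("cuda")
model.eval()

batch_size = 16   # assumed value; the issue benchmarks "the given batch size"
num_tokens = 20   # fixed generation length from the issue
num_runs = 100    # number of repetitions to average over, per the issue

# Dummy batch of log-mel features: Whisper expects (batch, num_mel_bins, 3000).
dummy_features = torch.randn(
    batch_size, model.config.num_mel_bins, 3000,
    dtype=torch.float16, device="cuda",
)

# Warmup runs (assumption) to exclude one-off initialization costs.
for _ in range(3):
    model.generate(dummy_features, min_new_tokens=num_tokens, max_new_tokens=num_tokens)

tokens_per_sec = []
for _ in range(num_runs):
    torch.cuda.synchronize()
    start = time.perf_counter()
    # min_new_tokens == max_new_tokens pins the decoded length, so each
    # run does an identical amount of generation work.
    model.generate(dummy_features, min_new_tokens=num_tokens, max_new_tokens=num_tokens)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    tokens_per_sec.append(batch_size * num_tokens / elapsed)

print(f"avg tokens/sec @ bs={batch_size}: {sum(tokens_per_sec) / num_runs:.1f}")
```

Fixing the generation length removes the variance that early EOS would otherwise introduce, so the averaged figure reflects pure decoding throughput rather than transcript length.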