huggingface / distil-whisper

Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.
MIT License

[Speculative Decoding] How to run speculative decoding for batch_size > 1? #11

Open · patrickvonplaten opened this issue 8 months ago

patrickvonplaten commented 8 months ago

Transformers 4.35 only supports speculative decoding for batch size == 1. To use speculative decoding with batch size > 1, please use the branch from this PR: https://github.com/huggingface/transformers/pull/26875

To do so, you need to install transformers as follows:

pip install git+https://github.com/huggingface/transformers.git@assistant_decoding_batch

and then you can run:

from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# the distilled checkpoint acts as the assistant (draft) model; it is loaded as a
# decoder-only model since it re-uses the teacher's encoder
assistant_model_id = "distil-whisper/distil-large-v2"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},  # enables speculative decoding
    torch_dtype=torch_dtype,
    chunk_length_s=15,
    batch_size=4,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "default", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])

The PR will be merged into Transformers soon.

Note: Given the "speculative" nature of assisted decoding (a.k.a. speculative decoding), it is not recommended to use it for batch sizes higher than 4, as this can actually make the transcription pipeline slower than running the teacher model alone. See Table 22 of the paper.
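
If in doubt, a quick timing comparison against the teacher-only pipeline tells you whether speculative decoding helps at your batch size. A rough sketch re-using the objects defined above (timings will vary with hardware and batch size):

import time

# baseline pipeline: same teacher model, same settings, but no assistant model
teacher_pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    chunk_length_s=15,
    batch_size=4,
    device=device,
)

for name, p in [("speculative", pipe), ("teacher-only", teacher_pipe)]:
    start = time.time()
    p(dataset[0]["audio"])
    print(f"{name}: {time.time() - start:.2f}s")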

soumendukrg commented 7 months ago

Does the main branch support speculative decoding with chunking and batch size = 1 for long-form transcription?

patrickvonplaten commented 7 months ago

Please read: https://github.com/huggingface/distil-whisper/issues/26#issuecomment-1805643512