huggingface / distil-whisper

Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.

timestamp for utterance or sentence? #27

Open · chenrq2005 opened this issue 7 months ago

chenrq2005 commented 7 months ago

Curious whether the current output format of distil-whisper includes a timestamp per utterance or sentence. If not, will this be considered in the future?

aleksandr-smechov commented 7 months ago

You can enable segment-level timestamps with:

output = pipe(inputs, return_timestamps=True, batch_size=4)

You can add word-level timestamps with the align function from WhisperX, but keep in mind that Whisper can sometimes return None for the end timestamp if the segment ends in the middle of a word. In that case, estimate the average duration per character and use it to guess the ending timestamp before aligning. Something like this:

avg_duration_per_char = total_duration / total_characters # for None timestamp cases
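
Expanding that into a minimal sketch (patch_missing_end is a hypothetical helper, and it assumes the {"text": ..., "timestamp": (start, end)} chunk dicts that the Transformers pipeline returns):

def patch_missing_end(chunks, total_duration):
    # `chunks` is assumed to be the result["chunks"] list from the pipeline,
    # where each entry looks like {"text": str, "timestamp": (start, end)}
    # and only the end of a tuple may be None
    total_characters = sum(len(chunk["text"]) for chunk in chunks) or 1
    avg_duration_per_char = total_duration / total_characters  # for None timestamp cases
    for chunk in chunks:
        start, end = chunk["timestamp"]
        if end is None:
            # estimate the end from the start and the segment's character count,
            # clamped so it never runs past the end of the audio
            end = min(start + len(chunk["text"]) * avg_duration_per_char, total_duration)
            chunk["timestamp"] = (start, end)
    return chunks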

chenrq2005 commented 7 months ago

Thanks so much! It worked: with return_timestamps=True, the timestamps are included in result["chunks"].

sanchit-gandhi commented 7 months ago

Indeed, Distil-Whisper was trained with sentence-level timestamps (the same training task as Whisper), which you can enable as follows:

import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

# load a dummy LibriSpeech sample for a quick test
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

# result with no timestamps
result = pipe(sample)
print("No timestamps: ", result["text"])

# result with timestamps
result = pipe(sample, return_timestamps=True)
print("With timestamps: ", result["chunks"])

We also support word-level timestamps in 🤗 Transformers, using the same dynamic time-warping algorithm as OpenAI's repository. Currently, this only works with a batch size of 1 and with the pre-trained Whisper models:

import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "openai/whisper-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

# request word-level timestamps, computed with dynamic time warping
# over the cross-attention weights of the alignment heads
result = pipe(sample, return_timestamps="word")
print(result["chunks"])
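
With return_timestamps="word", each entry in result["chunks"] covers a single word and its (start, end) tuple rather than a whole segment.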

Note that we haven't found the optimal alignment heads for distil-large-v2, so word-level timestamps aren't available for it yet. I'll do some analysis to see what the best configuration is and update the model config accordingly!
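
For context, the word-level timestamp machinery in Transformers reads the alignment heads from the model's generation config, so once good heads are found for distil-large-v2 they could be set along these lines (continuing the snippet above; the [layer, head] pairs below are placeholders, not measured values):

# hypothetical [layer, head] pairs -- NOT tuned values for distil-large-v2
model.generation_config.alignment_heads = [[2, 3], [4, 7]]

result = pipe(sample, return_timestamps="word")
print(result["chunks"])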

jasonchanly commented 4 months ago

Hi @sanchit-gandhi, can I check whether you've found the optimal alignment heads for word-level timestamps yet? Much appreciated!

afsara-ben commented 3 months ago

^^same q