m-bain / whisperX

WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
BSD 2-Clause "Simplified" License

How to load model? #886

Closed salekeennayeem closed 1 month ago

salekeennayeem commented 2 months ago

Hey, in v3.1.1 I see no option to load the model from a local path. Do I need to download the model each time?

def load_model(whisper_arch, device, compute_type="float16", asr_options=None, language=None,
               vad_options=None, model=None, task="transcribe"):
    '''Load a Whisper model for inference.
    Args:
        whisper_arch: str - The name of the Whisper model to load.
        device: str - The device to load the model on.
        compute_type: str - The compute type to use for the model.
        asr_options: dict - A dictionary of ASR options to use for the model.
        language: str - The language of the model.
        vad_options: dict - A dictionary of VAD options to use for the model.
        task: str - The task to perform (e.g. "transcribe").
    Returns:
        A Whisper pipeline.
    '''

    if whisper_arch.endswith(".en"):
        language = "en"

    model = WhisperModel(whisper_arch, device=device, compute_type=compute_type)
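One possible workaround, not confirmed by the maintainers: whisperX delegates to faster-whisper's `WhisperModel`, whose first argument accepts either a model name or a path to a local CTranslate2 model directory, so passing a local path as `whisper_arch` may avoid re-downloading. The `resolve_model` helper below is hypothetical, and the directory path and `whisperx.load_model` call shown in the comments are illustrative assumptions:

```python
import os

def resolve_model(model_dir: str, fallback: str = "large-v2") -> str:
    # Hypothetical helper: prefer a local model directory if it exists,
    # otherwise fall back to a model name that will be downloaded.
    return model_dir if os.path.isdir(model_dir) else fallback

# Sketch of usage, assuming whisper_arch is forwarded to faster-whisper's
# WhisperModel, which accepts a local CTranslate2 model directory:
#
#   import whisperx
#   model = whisperx.load_model(
#       resolve_model("/models/faster-whisper-large-v2"),  # assumed local path
#       device="cuda",
#       compute_type="float16",
#   )
```

If the directory does not exist, the helper falls back to a plain model name, which would then be downloaded to the default cache as before.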

brandonbondig commented 1 month ago

Did you find out?