sanchit-gandhi / whisper-jax

JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.
Apache License 2.0

OpenAI Whisper medium-model error while processing timestamps #51

Open nachoh8 opened 1 year ago

nachoh8 commented 1 year ago

I am getting the following error when using the "openai/whisper-medium" model with timestamp prediction:

There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?

The error comes from "transformers/models/whisper/tokenization_whisper.py", line 885. The generated tokens do not include any timestamps except the first one (0.0).

I have tested audios of different lengths (1 min to 1 h) and different parameters (half precision, stride), and the same error always occurs. With the base and large-v2 models, on the other hand, the error does not occur.

Code:

import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline

model = "openai/whisper-medium"
whisper = FlaxWhisperPipline(model, dtype=jnp.float32)
res: dict = whisper(audio_file, stride_length_s=0.0, language="es", return_timestamps=True)

My computer:

luisroque commented 1 year ago

I am having exactly the same issue.

sanchit-gandhi commented 1 year ago

Fixed on main in transformers, can you do:

pip install git+https://github.com/huggingface/transformers.git

to install transformers from main?
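
For reference, a quick way to check which transformers build is actually being imported after the install (a version ending in .dev0 indicates a build from source):

python -c "import transformers; print(transformers.__version__)"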

nachoh8 commented 1 year ago

I have tried that, but the error persists.

diegofer25 commented 1 year ago

Any update on this? I'm having exactly the same error when I try to embed the result for speechbrain audio diarization.

# Imports inferred from the snippet (they are not shown in the original excerpt):
import os
import time

import numpy as np
from pyannote.audio import Audio
from pyannote.core import Segment
from sklearn.cluster import AgglomerativeClustering

# Flattens the pipeline chunks into start/end/text segments, falling back to the
# audio duration when a chunk's end timestamp is missing.
def transform_timestamp_list(input_list, duration):
    output_list = []

    for item in input_list:
        output_item = {
            "start": item["timestamp"][0],
            "end": item["timestamp"][1] if item["timestamp"][1] is not None else duration,
            "text": item["text"]
        }
        output_list.append(output_item)

    return output_list

# The rest is an excerpt from inside the request handler; pipeline, temp_file,
# duration, embedding_model, num_speakers, convert_time, time_start and Response
# are defined elsewhere in the handler.

        result = pipeline(temp_file.name, task="transcribe", language="pt", return_timestamps=True)
        print("transcribe result", result)

        segments = transform_timestamp_list(result["chunks"], duration)

        # Create one speaker embedding per segment
        def segment_embedding(segment):
            audio = Audio()
            start = segment["start"]
            # Whisper overshoots the end timestamp in the last segment
            end = min(duration, segment["end"])
            clip = Segment(start, end)
            waveform, sample_rate = audio.crop(temp_file.name, clip)
            return embedding_model(waveform[None])

        print("starting embedding")
        embeddings = np.zeros(shape=(len(segments), 192))
        for i, segment in enumerate(segments):
            embeddings[i] = segment_embedding(segment)
        embeddings = np.nan_to_num(embeddings)
        print(f'Embedding shape: {embeddings.shape}')

        # Assign a speaker label to each segment by clustering the embeddings
        clustering = AgglomerativeClustering(num_speakers).fit(embeddings)
        labels = clustering.labels_
        for i in range(len(segments)):
            segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1)

        # Build the response payload
        output = []
        for segment in segments:
            output.append({
                'start': str(convert_time(segment["start"])),
                'end': str(convert_time(segment["end"])),
                'speaker': segment["speaker"],
                'text': segment["text"]
            })

        print("done with embedding")
        time_end = time.time()
        time_diff = time_end - time_start

        system_info = f"""-----Processing time: {time_diff:.5} seconds-----"""
        print(system_info)

        # Clean up the temporary audio file before returning
        os.remove(temp_file.name)

        return Response(
            json=output,
            status=200
        )
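
For context, the pipeline output with return_timestamps=True is a dict with a "chunks" list, and when the timestamp post-processing fails (the warning in the logs below), a chunk's end timestamp can come back as None, which is what the duration fallback in transform_timestamp_list handles. A rough, illustrative sketch of that structure (values made up):

result = {
    "text": "texto completo transcrito ...",
    "chunks": [
        {"timestamp": (0.0, 5.2), "text": " primer segmento"},
        # the end timestamp can be None when timestamp processing fails
        {"timestamp": (5.2, None), "text": " segundo segmento"},
    ],
}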

LOGS

01 May, 11:19:36
There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
01 May, 11:19:37
starting embedding
01 May, 11:19:38
Embedding shape: (12, 192)
01 May, 11:19:38
-----Processing time: 38.261 seconds-----
01 May, 11:19:38
done with embedding

jkf87 commented 1 year ago

I am having exactly the same issue too.

sanchit-gandhi commented 1 year ago

Hey @nachoh8 - I just double-checked your code sample: we shouldn't be using stride_length_s=0.0, since this means we have no overlap between chunks (which will severely degrade the quality of your transcription). Could you try leaving it set to None so that it defaults to chunk_length_s / 6 = 30 / 6 = 5? This probably explains why only your first batch had timestamps, and not the successive ones.
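
For anyone copying the snippet from the first comment, a sketch of the same call without the stride override (audio_file being the input path as before):

import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline

whisper = FlaxWhisperPipline("openai/whisper-medium", dtype=jnp.float32)
# leave stride_length_s unset so it defaults to chunk_length_s / 6 = 5 seconds of overlap
res: dict = whisper(audio_file, language="es", return_timestamps=True)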

phineas-pta commented 1 year ago

Any update? I'm getting the same error, running on a Google Colab GPU.