Open nachoh8 opened 1 year ago
I am having exactly the same issue.
This is fixed on main in transformers. Can you run:
pip install git+https://github.com/huggingface/transformers.git
to install transformers from main?
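You can confirm the dev install took effect by printing the installed version:

python -c "import transformers; print(transformers.__version__)"

A build installed from main should report a version ending in .dev0.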
I have tried but the error persists.
Any update on this? I'm getting exactly the same error when I try to embed the result with speechbrain audio diarization.
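For context, the snippet below roughly assumes the following setup. The exact embedding model is my guess (a 192-dimensional output matches speechbrain/spkrec-ecapa-voxceleb), and Audio/Segment are the pyannote helpers:

import os
import time
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from pyannote.audio import Audio
from pyannote.core import Segment
from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding
from transformers import pipeline as hf_pipeline

# Assumed speaker-embedding model; its 192-dim output matches np.zeros((len(segments), 192)) below
embedding_model = PretrainedSpeakerEmbedding("speechbrain/spkrec-ecapa-voxceleb")

# Whisper ASR pipeline; "openai/whisper-medium" is the model discussed in this thread
pipeline = hf_pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium",
    chunk_length_s=30,
)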
def transform_timestamp_list(input_list, duration):
    output_list = []
    for item in input_list:
        output_item = {
            "start": item["timestamp"][0],
            # The last chunk can have an open-ended timestamp, so fall back to the clip duration
            "end": item["timestamp"][1] if item["timestamp"][1] is not None else duration,
            "text": item["text"],
        }
        output_list.append(output_item)
    return output_list

result = pipeline(temp_file.name, task="transcribe", language="pt", return_timestamps=True)
print("transcribe result", result)
segments = transform_timestamp_list(result["chunks"], duration)

# Create embedding
def segment_embedding(segment):
    audio = Audio()
    start = segment["start"]
    # Whisper overshoots the end timestamp in the last segment
    end = min(duration, segment["end"])
    clip = Segment(start, end)
    waveform, sample_rate = audio.crop(temp_file.name, clip)
    return embedding_model(waveform[None])

print("starting embedding")
embeddings = np.zeros(shape=(len(segments), 192))
for i, segment in enumerate(segments):
    embeddings[i] = segment_embedding(segment)
embeddings = np.nan_to_num(embeddings)
print(f'Embedding shape: {embeddings.shape}')

# Assign speaker label
clustering = AgglomerativeClustering(num_speakers).fit(embeddings)
labels = clustering.labels_
for i in range(len(segments)):
    segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1)

# Make output
output = []  # Initialize an empty list for the output
for segment in segments:
    # Append the segment to the output list
    output.append({
        'start': str(convert_time(segment["start"])),
        'end': str(convert_time(segment["end"])),
        'speaker': segment["speaker"],
        'text': segment["text"],
    })

print("done with embedding")
time_end = time.time()
time_diff = time_end - time_start
system_info = f"""-----Processing time: {time_diff:.5} seconds-----"""
print(system_info)

# Add this line at the end of the handler function, before the return statement
os.remove(temp_file.name)
return Response(
    json=output,
    status=200
)
LOGS
01 May, 11:19:36
There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
01 May, 11:19:37
starting embedding
01 May, 11:19:38
Embedding shape: (12, 192)
01 May, 11:19:38
-----Processing time: 38.261 seconds-----
01 May, 11:19:38
done with embedding
I am having exactly the same issue too.
Hey @nachoh8 - I just double-checked your code sample: we shouldn't be using stride_length_s=0.0, since that means there is no overlap between chunks (which will severely degrade the quality of your transcription). Could you try leaving it set to None, so that it defaults to chunk_length_s / 6 = 30 / 6 = 5? This probably explains why only your first batch had timestamps, and not the successive ones.
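In other words, something along these lines (a sketch that mirrors the call pattern from the snippet above; only the stride handling changes):

from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium",
    chunk_length_s=30,
    # stride_length_s left unset -> defaults to chunk_length_s / 6 = 5 s of overlap
)
result = asr("audio.wav", task="transcribe", language="pt", return_timestamps=True)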
Any update? I'm getting the same error, running on a Google Colab GPU.
I am getting the following error when using the "openai/whisper-medium" model with timestamp prediction:
There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
This error comes from "transformers/models/whisper/tokenization_whisper.py", line 885. The generated tokens do not include any timestamps, except for the first one (0.0). I have tested audios of different lengths (1 min to 1 h) and different parameters (half-precision, stride), and the same error always occurs. On the other hand, with the base and large-v2 models this error does not occur.
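Roughly, the failing setup looks like this (a condensed, illustrative sketch of the kind of call I'm making; the device and chunk settings here are just examples of the variants I tried, and the exact code is under "Code:" below):

import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium",
    torch_dtype=torch.float16,  # half-precision was one of the variants I tried
    device="cuda:0",
    chunk_length_s=30,
)
out = asr("audio.wav", return_timestamps=True)
# -> emits the warning above instead of returning usable timestamps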
Code:
My computer: