If no speech is detected in the audio file, Transcribe still returns a JSON document, but without the expected speech segments; the code currently assumes those segments are always present and raises an exception. We need to check for their presence instead. Error message pointing to at least one location in the code where this is an issue:
```
[ERROR] TypeError: 'NoneType' object is not subscriptable
Traceback (most recent call last):
  File "/function/transcribe_to_docx.py", line 756, in lambda_handler
    speech_segments = create_turn_by_turn_segments(transcript, isSpeakerMode = True)
  File "/function/transcribe_to_docx.py", line 538, in create_turn_by_turn_segments
    for segment in data["results"]["speaker_labels"]["segments"]:
```
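One possible fix is to replace the direct subscripting with defensive lookups so an absent `speaker_labels` block simply yields no segments. This is a minimal sketch only: the function and field names are taken from the traceback above, and the per-segment processing body in `transcribe_to_docx.py` is assumed, not reproduced.

```python
def create_turn_by_turn_segments(data, isSpeakerMode=True):
    """Build speech segments, tolerating transcripts with no detected speech."""
    speech_segments = []
    # When Transcribe detects no speech, "speaker_labels" (or its
    # "segments" list) can be missing, so use .get() with defaults
    # instead of indexing, which raises on None.
    speaker_labels = data.get("results", {}).get("speaker_labels") or {}
    for segment in speaker_labels.get("segments", []):
        # ...existing per-segment processing would go here...
        speech_segments.append(segment)
    return speech_segments
```

With this guard, an empty or speech-free transcript produces an empty segment list instead of a `TypeError`, and `lambda_handler` can then decide how to report a document with no speech.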