Open RawadMelhem opened 3 years ago
Hi, I am facing a similar problem. Because the silence is trimmed, mapping the output predictions back to the original audio samples becomes difficult, since the two no longer share the same timeline. Is it possible to include the silence in the final output? That would make the mapping much easier.
Hi @steffi25 and @RawadMelhem, did you manage to find a fix for this issue? I am facing the same problem.
It is solved. You can take the "wav" object from before the diarization process, which is the audio file without silence; the wav object is the output of the wav_preprocessing method.
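If you still need timestamps relative to the original (untrimmed) recording rather than to that wav object, one option is to convert them yourself. Below is a minimal sketch, assuming you can get the start/end times of the kept (non-silent) regions from whatever VAD or silence-removal step the preprocessing uses; the function and variable names here are hypothetical, not part of this project's API.

```python
# Sketch: map a timestamp from the silence-trimmed audio back to the original
# recording, given the kept (non-silent) segments in ORIGINAL-file seconds.
# `kept_segments` is an assumption -- it must come from the silence-removal step.

from typing import List, Tuple


def trimmed_to_original(t_trimmed: float,
                        kept_segments: List[Tuple[float, float]]) -> float:
    """Convert a time in the trimmed audio to a time in the original audio."""
    elapsed = 0.0  # trimmed-audio time consumed so far
    for start, end in kept_segments:
        seg_len = end - start
        if t_trimmed <= elapsed + seg_len:
            # The timestamp falls inside this kept segment.
            return start + (t_trimmed - elapsed)
        elapsed += seg_len
    # Past the last kept segment: clamp to the end of the last one.
    return kept_segments[-1][1] if kept_segments else t_trimmed


if __name__ == "__main__":
    # Example: silence removed between 5-10 s and after 20 s of the original file.
    kept = [(0.0, 5.0), (10.0, 20.0)]
    print(trimmed_to_original(7.5, kept))  # -> 12.5 s in the original audio
```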
Hi, thank you very much, the project is very interesting. I have a problem: the diarized speech is always trimmed at the end, sometimes by more than 10 seconds. For example:
- total duration of audio file = 27 s, the result ends at 16 s
- total duration of audio file = 45 s, the result ends at 30 s

I am using a sample rate of 16 kHz. I appreciate any help.
Rawad