yinruiqing / pyannote-whisper


invalid str2bool value #1

Open ItsMe-TJ opened 1 year ago

ItsMe-TJ commented 1 year ago

I'm getting "transcribe.py: error: argument --diarization: invalid str2bool value: 'true'".

How do I fix this?

Oh, and I have a question: how would I go about splitting the audio into individual files by speaker? Maybe that's a feature you could add?

Thanks!

yinruiqing commented 1 year ago

> I'm getting "transcribe.py: error: argument --diarization: invalid str2bool value: 'true'".
>
> How do I fix this?

Use `True` instead of `true`. I have fixed the readme.
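For what it's worth, a str2bool helper that accepts either capitalization would avoid this error entirely. A minimal sketch — this is not pyannote-whisper's actual parser; the helper and flag wiring are illustrative:

```python
import argparse

def str2bool(value):
    # Accept common spellings in any case: "true", "True", "1", "yes", ...
    if isinstance(value, bool):
        return value
    if value.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if value.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError(f"invalid str2bool value: {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--diarization", type=str2bool, default=False)
args = parser.parse_args(["--diarization", "true"])  # lowercase now works
print(args.diarization)
```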

> Oh, and I have a question: how would I go about splitting the audio into individual files by speaker? Maybe that's a feature you could add?

See the readme.

Thanks!

ItsMe-TJ commented 1 year ago

Thank you so much! It would be cool if you could add a function to remove overlapped speech. The reason I'm asking is that I'm making datasets for training text-to-speech models, using audio from podcasts and the like where two people are talking. I know pyannote has this function, but I'm honestly not savvy enough to implement it myself. Removing people talking over each other before diarization could help in making really clean datasets. Either way, thank you so much.

yinruiqing commented 1 year ago

> Thank you so much! It would be cool if you could add a function to remove overlapped speech.

I will do it this weekend.

ItsMe-TJ commented 1 year ago

> > Thank you so much! It would be cool if you could add a function to remove overlapped speech.
>
> I will do it this weekend.

Thank you!

yinruiqing commented 1 year ago

@ItsMe-TJ I think what you want is the following function:

```python
from pyannote.core import Segment

def remove_overlap_part(ann):
    # `to_overlap` (defined elsewhere) should return an annotation covering
    # the regions of `ann` where two or more speakers talk at once.
    overlap = to_overlap(ann).get_timeline()
    if len(overlap) == 0:
        return ann
    overlap_start = overlap[0].start
    overlap_end = overlap[-1].end
    ann_start = ann.get_timeline()[0].start
    ann_end = ann.get_timeline()[-1].end
    # The gaps between overlapped regions are what we want to keep,
    # plus anything before the first overlap and after the last one.
    non_overlap = overlap.gaps()
    if overlap_start > ann_start:
        non_overlap.add(Segment(ann_start, overlap_start))
    if ann_end > overlap_end:
        non_overlap.add(Segment(overlap_end, ann_end))
    return ann.crop(non_overlap)
```
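The keep-the-gaps arithmetic above can be sanity-checked with plain interval tuples, independent of pyannote — the helper here is illustrative, not part of the project:

```python
def gaps_to_keep(extent, overlaps):
    """Given the full (start, end) extent and a sorted list of overlapped
    (start, end) intervals, return the intervals to keep."""
    keep = []
    cursor = extent[0]
    for start, end in overlaps:
        if start > cursor:
            keep.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < extent[1]:
        keep.append((cursor, extent[1]))
    return keep

# Two overlapped regions inside a 0-10 s recording:
print(gaps_to_keep((0.0, 10.0), [(2.0, 3.0), (5.0, 6.5)]))
# -> [(0.0, 2.0), (3.0, 5.0), (6.5, 10.0)]
```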

By the way, I also have an open-source TTS project, deepaudio-tts. I need someone to work on it with me. Are you interested?

ItsMe-TJ commented 1 year ago

I am always interested in TTS stuff! So absolutely! Though when you say "work on it with me", I should mention I don't know how to code or anything, lol, but I'm happy to help in any way I can!

ItsMe-TJ commented 1 year ago

Okay, so I have an idea, and since you're doing the whole TTS thing, you might benefit from it as well!

So let's say I have a podcast episode where two people are talking. I want to specify the number of speakers, remove the overlapped speech, and output the non-overlapped speech to a new audio file. Then take that file, run diarization on it, split the audio by speaker, and output the resulting files into folders: Speaker 1, Speaker 2, etc. In my mind, this would make clean datasets for training TTS.

So that's my idea; hopefully you understand now why I asked about removing overlapped speech and splitting the audio.

idk if you're interested in taking a crack at it! As I said, I don't code, though I really REALLY wish I knew how. I've tried learning many times, but I think I'm just too dumb lol.
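The per-speaker folder layout in this idea could be sketched roughly like this — the segment list, folder names, and helper are made up for illustration, with real timestamps coming from the diarization step:

```python
from pathlib import Path

def plan_output_paths(segments, out_dir="dataset"):
    """Map (start, end, speaker) segments to per-speaker file paths."""
    paths = []
    for start, end, speaker in segments:
        folder = Path(out_dir) / speaker.replace(" ", "_")
        paths.append(folder / f"{start:.2f}s_{end:.2f}s.wav")
    return paths

segments = [(0.0, 4.2, "Speaker 1"), (4.5, 9.1, "Speaker 2")]
for p in plan_output_paths(segments):
    print(p)
# e.g. dataset/Speaker_1/0.00s_4.20s.wav
#      dataset/Speaker_2/4.50s_9.10s.wav
```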

yinruiqing commented 1 year ago

Good idea. I will implement it soon and put it in deepaudio-tts.

ItsMe-TJ commented 1 year ago

> Good idea. I will implement it soon and put it in deepaudio-tts.

Great! It will be really helpful, and I can't wait to try it!

ItsMe-TJ commented 1 year ago

How's it going? I know it's probably not easy; I'm just curious about the progress!

yinruiqing commented 1 year ago

> How's it going? I know it's probably not easy; I'm just curious about the progress!

I'm going to finish within a week. I need someone with frontend skills to help me with the interaction.

yinruiqing commented 1 year ago

@ItsMe-TJ You can use the following code.

```python
import numpy as np
from scipy.io.wavfile import write

def save_wave_from_numpy(data, f, rate=16000):
    # Normalize to full scale and convert to 16-bit PCM before writing.
    scaled = np.int16(data / np.max(np.abs(data)) * 32767)
    write(f, rate, scaled)

import whisper
from pyannote.audio import Pipeline, Audio

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                    use_auth_token="your/token")
model = whisper.load_model("tiny.en")
diarization_result = pipeline("data/afjiv.wav")

audio = Audio(sample_rate=16000, mono=True)
audio_file = "data/afjiv.wav"
result_without_overlap = remove_overlap_part(diarization_result)
for segment, _, speaker in result_without_overlap.itertracks(yield_label=True):
    waveform, sample_rate = audio.crop(audio_file, segment)
    filename = f"{segment.start:.2f}s_{segment.end:.2f}s_{speaker}.wav"
    save_wave_from_numpy(waveform.squeeze().numpy(), filename)
    text = model.transcribe(waveform.squeeze().numpy())["text"]
    print(f"{segment.start:.2f}s {segment.end:.2f}s {speaker}: {text}")
```

yinruiqing commented 1 year ago

@ItsMe-TJ I am working on audio-annotation. It will provide an easy way to export audio segments for a single speaker.