Closed Bagfish closed 1 year ago
Hi, you will find in run_example.sh how to run the scoring directly with dscore. I am not sure how pyannote runs the scoring, as it is a different toolkit, but quickly looking at your code, I do not see which forgiveness collar is used for scoring. Perhaps the default is 0 seconds and that explains the higher error.
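To illustrate what a forgiveness collar does, here is a minimal pure-Python sketch (a hypothetical helper, not the dscore or pyannote implementation): time within ± collar seconds of a reference speaker-change boundary is excluded from scoring, so small boundary placement differences are forgiven. With a 0-second collar, every tiny boundary mismatch counts against the DER.

```python
def scored_duration(ref_boundaries, total_dur, collar):
    """Return the duration (seconds) that remains scored after removing
    a symmetric +/- collar around each reference boundary.

    Hypothetical illustration only; real scorers (dscore, pyannote)
    apply the collar to the evaluation map, not like this exact helper.
    """
    # Excluded interval around each boundary, clipped to [0, total_dur]
    excluded = [(max(0.0, b - collar), min(total_dur, b + collar))
                for b in sorted(ref_boundaries)]
    # Merge overlapping intervals so overlapping collars are not
    # subtracted twice
    merged = []
    for start, end in excluded:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return total_dur - sum(end - start for start, end in merged)

# A 10 s file with reference boundaries at 4 s and 6 s:
# a 0.25 s collar removes 0.5 s per boundary, leaving 9.0 s scored,
# while a 0 s collar leaves the full 10.0 s scored (stricter).
print(scored_duration([4.0, 6.0], 10.0, 0.25))  # 9.0
print(scored_duration([4.0, 6.0], 10.0, 0.0))   # 10.0
```

The same segmentation therefore yields a higher DER when scored with collar 0 than with a nonzero collar, which is one plausible cause of the discrepancy discussed in this thread.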
Dear fnlandini: Thanks for replying to my question. Maybe it was the forgiveness collar that caused the high DER. I have solved all the problems by running the code on a Linux system. And I want to say thank you for providing this speaker diarization program; it's really helpful to me.
Glad it was of help. I will close the issue now. Feel free to reopen if you see fit
Maybe it is also a problem with my PC's operating system, so I used pyannote.metrics to compute the DER, but I found that the DER is 26.28%, while according to the result in the README, the DER on ES2005a.wav is only 7.06%. I don't know whether my code is right; could someone point out the error in my code?
```python
from __future__ import print_function
from __future__ import unicode_literals

from pyannote.core import Segment, Annotation
from pyannote.metrics.diarization import DiarizationErrorRate
from dscore.scorelib.rttm import load_rttm

ref_rttm = './example/rttm/ES2005a.rttm'
hyp_rttm = './exp/ES2005a.rttm'

# Build the reference annotation from the ground-truth RTTM
ref_turns, ref_speaker_ids_set, _ = load_rttm(ref_rttm)
reference = Annotation()
for turn in ref_turns:
    onset = turn.onset
    offset = turn.onset + turn.dur
    reference[Segment(onset, offset)] = turn.speaker_id

# Build the hypothesis annotation from the system-output RTTM
hyp_turns, hyp_speaker_ids_set, _ = load_rttm(hyp_rttm)
hypothesis = Annotation()
for turn in hyp_turns:
    onset = turn.onset
    offset = turn.onset + turn.dur
    hypothesis[Segment(onset, offset)] = turn.speaker_id

# NOTE: collar defaults to 0.0 s here; a nonzero collar,
# e.g. DiarizationErrorRate(collar=0.25), forgives small
# boundary differences and lowers the reported DER
metric = DiarizationErrorRate()
metric(reference, hypothesis)
report = metric.report(display=True)
```