google / uis-rnn

This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization.
https://arxiv.org/abs/1810.04719
Apache License 2.0

about the training loss and the batch size #33

Closed simpleishappy closed 4 years ago

simpleishappy commented 5 years ago

I want to know whether the loss below is normal or not. I set batch_size=10; then, no matter how I change the dataset, the loss converges to about 900. [training loss screenshot]

My background

Have I read the README.md file?

Have I searched for similar questions from closed issues?

Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

Have I tried to find the answers in the reference Speaker Diarization with LSTM?

Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

fedderrico commented 5 years ago

Hello, I've got pretty much the same result, except mine stops at about -600. I used embeddings obtained by training https://github.com/HarryVolek/PyTorch_Speaker_Verification on the TIMIT database. Also, when I use data obtained from that project with the uis-rnn demo.py, the performance is not very good. So my question is the same. It seems like I'm doing something wrong.

And, BTW, thank you very much for sharing this project!

Finished diarization experiment
Config:
  sigma_alpha: 1.0
  sigma_beta: 1.0
  crp_alpha: 1.0
  learning rate: 0.001
  regularization: 1e-05
  batch size: 10

Performance:
  averaged accuracy: 0.565950
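For context, values like these are normally supplied as command-line flags parsed by uisrnn.parse_arguments(), as in demo.py. A minimal sketch of how such a configuration feeds into training follows; the flag and attribute names are written from memory of the library's arguments.py, so treat them as assumptions to verify there:

```python
# Sketch only: these hyperparameters are usually passed as flags, e.g.
#   python demo.py --batch_size 10 --learning_rate 0.001 \
#       --sigma_alpha 1.0 --sigma_beta 1.0 --crp_alpha 1.0
# (flag names recalled from uisrnn's arguments.py; verify before use).
import uisrnn

model_args, training_args, inference_args = uisrnn.parse_arguments()
model_args.observation_dim = 256    # must match the d-vector dimension
training_args.batch_size = 10       # the batch size discussed in this thread
training_args.learning_rate = 0.001

model = uisrnn.UISRNN(model_args)
# train_sequence: (num_observations, observation_dim) float numpy array
# train_cluster_id: list of per-observation speaker label strings
# model.fit(train_sequence, train_cluster_id, training_args)
```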

wq2012 commented 5 years ago

@fedderrico

I have no experience with the TIMIT dataset. But I downloaded the single sample from its website: https://catalog.ldc.upenn.edu/LDC93S1

And it doesn't look right...

This dataset doesn't seem to contain speaker labels, not to mention timestamped speaker labels.

Also, the audio files are all from single speakers.

How can this dataset be used for diarization?

fedderrico commented 5 years ago

The TIMIT dataset was used to train the embedding net. For the uis-rnn, there's a script to combine several speakers into one utterance (a rough sketch of the idea is below).
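A minimal sketch of what such a concatenation script might do, assuming d-vectors have already been extracted per speaker; the function and variable names are illustrative, not the actual script from PyTorch_Speaker_Verification:

```python
import numpy as np

def make_fake_utterance(speaker_dvectors):
    """Concatenate per-speaker d-vector segments into one 'fake' utterance.

    speaker_dvectors: dict mapping speaker id -> (num_segments, dim) array.
    Returns (sequence, cluster_id) in the shape uisrnn's fit() expects:
    a (num_observations, dim) float array and a matching list of labels.
    Illustration only, not the actual script referenced above.
    """
    sequences, cluster_ids = [], []
    for spk, dvecs in speaker_dvectors.items():
        sequences.append(dvecs)
        cluster_ids.extend([spk] * len(dvecs))
    return np.concatenate(sequences, axis=0), cluster_ids
```

Note that an utterance built this way has each speaker in one contiguous block with no real turn-taking, which is why it is called a "fake" utterance in the discussion below.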

wq2012 commented 5 years ago

@fedderrico

That makes sense. Thanks for the clarification.

However, training uis-rnn on concatenated fake utterances is not the correct way to use it. It's not what uis-rnn is designed for.

Please see the discussions in #43 and #45

fedderrico commented 5 years ago

Wow, it turns out I misunderstood it from the beginning. Thank you. I'll try a different approach.

wq2012 commented 5 years ago

@fedderrico

That said, we never tried to train uis-rnn on fake utterances. So there's no absolute evidence that it will not work. It's just not how it was designed to be used.

Maybe training uis-rnn on a mixture of real dialogues and fake utterances will still help. We'll never know until we have some experimental results.

BarCodeReader commented 4 years ago

> @fedderrico
>
> That said, we never tried to train uis-rnn on fake utterances. So there's no absolute evidence that it will not work. It's just not how it was designed to be used.
>
> Maybe training uis-rnn on a mixture of real dialogues and fake utterances will still help. We'll never know until we have some experimental results.

Hi,

Firstly, thanks for your paper and code. Interesting concept.

Here is what I don't understand: in the paper, talking about the GE2E loss, you mention this: "we feed N speakers * M utterances as a batch... For TI-SV training, we divide training utterances into smaller segments, which we refer to as partial utterances. While we don't require all partial utterances to be of the same length, all partial utterances in the same batch must be of the same length." So that means we need to prepare each utterance for each speaker, and we only use part of the information from each utterance, namely a segment in the [140, 180] frame range.
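For concreteness, here is a rough sketch of that batching scheme as described in the GE2E paper: pick N speakers and M utterances each, draw one segment length in [140, 180] frames per batch, and crop a random partial utterance of that length from each utterance. The names below are illustrative, not the released training code:

```python
import numpy as np

def sample_ge2e_batch(features_by_speaker, n_speakers=4, m_utts=5,
                      min_len=140, max_len=180):
    """Illustrative GE2E-style batch sampling (not the authors' code).

    features_by_speaker: dict speaker -> list of (num_frames, dim) arrays,
    each assumed to have at least max_len frames.
    One segment length is drawn per batch, so every partial utterance in
    the batch has the same number of frames.
    """
    seg_len = np.random.randint(min_len, max_len + 1)
    speakers = np.random.choice(list(features_by_speaker), n_speakers,
                                replace=False)
    batch = []
    for spk in speakers:
        utts = features_by_speaker[spk]
        for idx in np.random.choice(len(utts), m_utts, replace=True):
            feats = utts[idx]
            start = np.random.randint(0, feats.shape[0] - seg_len + 1)
            batch.append(feats[start:start + seg_len])
    return np.stack(batch)  # shape: (n_speakers * m_utts, seg_len, dim)
```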

So after this, if I get you right, you extract features from a real conversation, feed them into the trained 3-layer LSTM to generate d-vectors, and then use uis-rnn to do the identification? So that means we also generate d-vectors for "silence" or other unrelated segments that get fed to uis-rnn?

Could you please explain a bit about the "conversation" data used to train the uis-rnn?

wq2012 commented 4 years ago

@BarCodeReader

> Here is what I don't understand: in the paper, talking about the GE2E loss, you mention this: "we feed N speakers * M utterances as a batch... For TI-SV training, we divide training utterances into smaller segments, which we refer to as partial utterances. While we don't require all partial utterances to be of the same length, all partial utterances in the same batch must be of the same length." So that means we need to prepare each utterance for each speaker, and we only use part of the information from each utterance, namely a segment in the [140, 180] frame range.

Yes, in each batch during training, only part of each utterance is used for speaker recognition. But during inference, embeddings from multiple sliding windows are aggregated. This runtime inference logic supports online verification without having to wait until the end of the utterance.
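A minimal sketch of that kind of sliding-window aggregation, under the assumption that window embeddings are L2-normalized and averaged; the helper names are illustrative, not the released inference code:

```python
import numpy as np

def aggregate_dvector(embed_fn, frames, window=160, overlap=0.5):
    """Illustrative sliding-window aggregation (not the released code).

    embed_fn: maps a (window, dim) block of frames to one embedding vector.
    Window embeddings are L2-normalized and averaged, so the running
    d-vector can be updated online as new windows arrive.
    """
    hop = max(1, int(window * (1 - overlap)))
    embeddings = []
    for start in range(0, max(1, frames.shape[0] - window + 1), hop):
        e = embed_fn(frames[start:start + window])
        embeddings.append(e / (np.linalg.norm(e) + 1e-8))
    dvector = np.mean(embeddings, axis=0)
    return dvector / (np.linalg.norm(dvector) + 1e-8)
```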

> So after this, if I get you right, you extract features from a real conversation, feed them into the trained 3-layer LSTM to generate d-vectors, and then use uis-rnn to do the identification?

Please be aware of the key difference:

> So that means we also generate d-vectors for "silence" or other unrelated segments that get fed to uis-rnn?
>
> Could you please explain a bit about the "conversation" data used to train the uis-rnn?

Whether it is speaker verification or diarization, there will be a Voice Activity Detection (VAD) endpointer that removes silence.
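As a toy illustration of the idea only (the endpointer referred to here is a trained model, not a simple threshold), an energy-based filter over feature frames might look like this:

```python
import numpy as np

def drop_silence(frames, energy_percentile=30):
    """Toy energy-threshold VAD, for illustration only.

    frames: (num_frames, dim) log-mel/MFCC features. A real system uses a
    trained voice activity detector rather than this percentile threshold.
    Returns the frames kept as speech and the boolean speech mask.
    """
    energy = frames.mean(axis=1)  # crude per-frame energy proxy
    threshold = np.percentile(energy, energy_percentile)
    mask = energy > threshold
    return frames[mask], mask
```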