snakers4 / silero-vad

Silero VAD: pre-trained enterprise-grade Voice Activity Detector

❓ VAD robustness to noise-only signals in ONNX v3 vs. v4 models #369

Closed cassiotbatista closed 4 months ago

cassiotbatista commented 1 year ago

Hello!

First of all thanks for the VAD model, it is great and really helpful!

I've been doing some experiments with the 16 kHz ONNX models in order to establish a baseline on noisy speech as well as on purely non-speech data. Results on the former, for both the AVA-Speech and LibriParty datasets, seem to be in accordance with the quality-metrics section of Silero's wiki: v4 is indeed better than v3.

However, for noise-only signals, I've been getting consistently 2-3x worse results from v4 w.r.t. v3 on ESC-50, UrbanSound8K and FSD50K. This is concerning, especially in an always-on scenario (let's say a "wild" one) where the VAD is used as a pre-processing front-end to avoid calling a more power-hungry system (which is often the case).

The following table shows the values of the error-rate metric, namely 1 - acc, where acc is sklearn's accuracy_score, so lower means better; the best results are highlighted in bold. The numbers being measured are the sigmoid'ed outputs of both models' forward method (early-returned from the get_speech_timestamps() utility), with a threshold of 0.5 and a window size of 1536 samples. A sketch of the evaluation loop follows the table.

| dataset | silero py v3 | silero py v4 |
| --- | --- | --- |
| AVA-Speech | 0.2094 | **0.1545** |
| LibriParty | 0.1610 | **0.0576** |
| ESC-50 | **0.0407** | 0.1291 |
| UrbanSound8K | **0.0829** | 0.2444 |
| FSD50K | **0.0640** | 0.1120 |
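Roughly, the per-window evaluation loop looks like this (a simplified sketch of the procedure, not a drop-in script: file loading and resampling are omitted, and the `frame_labels` array derived from the reference annotations is assumed):

```python
import numpy as np
import torch
from sklearn.metrics import accuracy_score

# v3/v4 JIT model via torch.hub; ONNX inference is analogous via onnxruntime.
model, _ = torch.hub.load("snakers4/silero-vad", "silero_vad")

WINDOW = 1536     # samples per window at 16 kHz
THRESHOLD = 0.5

def error_rate(wav: torch.Tensor, frame_labels: np.ndarray) -> float:
    """1 - accuracy of the binarized per-window speech probabilities."""
    model.reset_states()  # the model keeps state across windows
    probs = [
        model(wav[i:i + WINDOW], 16000).item()
        for i in range(0, len(wav) - WINDOW + 1, WINDOW)
    ]
    preds = (np.array(probs) > THRESHOLD).astype(int)
    return 1.0 - accuracy_score(frame_labels[:len(preds)], preds)
```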

I'm sharing the uttids of the files I've been using in my experiments. It is not exactly ready to go, because I resegmented and dumped some resampled versions of the datasets to disk, but I believe it should be useful and even reproducible if necessary. The format is uttid,bos,eos,label, where BOS and EOS are the start and end of the speech segment. A value of -1 in those fields means there's no speech segment at all 😄

test_files.tar.gz
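Parsing that format is straightforward; a minimal sketch (the `load_manifest` name is just illustrative):

```python
import csv

def load_manifest(path):
    """Read uttid,bos,eos,label rows; bos/eos of -1 mark files
    that contain no speech segment at all."""
    rows = []
    with open(path, newline="") as f:
        for uttid, bos, eos, label in csv.reader(f):
            rows.append({
                "uttid": uttid,
                "bos": float(bos),
                "eos": float(eos),
                "label": label,
                "has_speech": float(bos) >= 0,
            })
    return rows
```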

My environment:

Finally, some questions:

1. Could this be due to v4's encoder shrinking w.r.t. v3's (see the number of parameters below, taken from the JIT models)? Or should this be more of a training-data issue?
2. Did you also observe this behaviour on non-speech-only data?
3. Do these numbers make sense at all? Am I doing something wrong? If so, I'd appreciate some directions.

| component | silero_vad.jit (v3) | silero_vad.jit (v4) |
| --- | ---: | ---: |
| first_layer | 4,934 | 9,836 |
| encoder | 53,520 | 13,680 |
| decoder | 130 | 66,625 |
| lstm | 66,560 | — |
| **total** | **125,144** | **90,141** |
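(These counts can be reproduced with something like the sketch below over the JIT checkpoints; the file name is illustrative, and the grouping key may need adjusting to the actual module hierarchy.)

```python
import torch
from collections import Counter

def params_per_submodule(jit_path):
    """Group parameter counts by top-level submodule of a TorchScript model."""
    model = torch.jit.load(jit_path, map_location="cpu")
    counts = Counter()
    for name, p in model.named_parameters():
        counts[name.split(".")[0]] += p.numel()  # e.g. 'encoder', 'decoder'
    return counts

print(params_per_submodule("silero_vad_v3.jit"))
```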

Thanks!

snakers4 commented 1 year ago

Hi!

This is definitely an interesting area to cover for v5; we did not really think about it explicitly before! You see, we viewed VAD as separating speech / noised speech from everything else (silence, mild noise, music).

This poses the question of separating speech from extremely noisy backgrounds, if I understand correctly. Or of handling streams where there is always noise and only sometimes speech.

> However, for noise-only signals, I've been getting consistently 2-3x worse results from v4 w.r.t. v3. Could this be due to v4's encoder shrinking w.r.t. v3's (see the number of parameters below, taken from the JIT models)? Or should this be more of a training-data issue?

We simply did not optimize for this metric, so the result is more or less random: our data construction prefers mild noise and more or less clean speech. In a nutshell, we did not optimize for this scenario at all.

> Did you also observe this behaviour on non-speech-only data?

We observed that for very loud noise our VAD does not behave very well.

> Do these numbers make sense at all? Am I doing something wrong? If so, I'd appreciate some directions. The numbers being measured are the sigmoid'ed outputs of both models' forward method (early-returned from the get_speech_timestamps() utility), with a threshold of 0.5 and a window size of 1536 samples.

Yes, this makes sense. There are a lot of gimmicks in the get_speech_timestamps() method to make speech detection more robust. We will try to (i) replicate your metrics, (ii) see whether applying more of the above method improves the results, and (iii) adopt the task long-term.
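To give an idea of the kind of post-processing meant here, a toy hysteresis-style smoother over the raw probabilities (parameter names and defaults are illustrative, not the actual implementation):

```python
def smooth_probs_to_segments(probs, threshold=0.5, neg_threshold=0.35,
                             min_speech_frames=4, min_silence_frames=3):
    """Toy hysteresis binarization: enter 'speech' when prob >= threshold,
    leave only after min_silence_frames consecutive frames below
    neg_threshold, and drop speech runs shorter than min_speech_frames."""
    segments = []
    start, silence = None, 0
    for i, p in enumerate(probs):
        if start is None:
            if p >= threshold:
                start, silence = i, 0
        elif p < neg_threshold:
            silence += 1
            if silence >= min_silence_frames:
                end = i - silence + 1
                if end - start >= min_speech_frames:
                    segments.append((start, end))
                start, silence = None, 0
        else:
            silence = 0
    if start is not None and len(probs) - start >= min_speech_frames:
        segments.append((start, len(probs)))
    return segments
```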

There is also good news: we got a bit of support for our project, so it will enjoy some attention in the near future with regard to customization, generalization and flexibility.

cassiotbatista commented 1 year ago

Hello!

Thank you for your response, @snakers4.

> This poses the question of separating speech from extremely noisy backgrounds, if I understand correctly. Or of handling streams where there is always noise and only sometimes speech.

Yes, it is not exactly about "detecting speech", but rather about "not triggering on non-speech". What I had in mind is closely related to the latter: something like the idle periods of an ASR-based dictation application in which the VAD is always on. To my mind, v4 would trigger, say, twice as often as v3 on background noises (such as a dog barking), which in turn might leave the ASR exposed. For IoT applications, on the other hand, it also means unnecessarily calling a more power-hungry system more frequently.

> We simply did not optimize for this metric, so the result is more or less random.

Ok, got it!

> There are a lot of gimmicks in the get_speech_timestamps() method to make speech detection more robust.

In fact, I only used the windowing and the forward call from get_speech_timestamps(), and evaluated right after the binarization step, i.e. on the model's output posteriors rather than on the timestamps. Perhaps I should continue the tests at the segment level (e.g., the best model should have the lowest total duration of wrongly-detected speech segments on noise-only data), even though I believe v4 would still behave worse, though maybe not by the same 2-3x margin.
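That segment-level metric would be simple to compute; a minimal sketch (the helper name is illustrative):

```python
def false_speech_seconds(timestamps, sample_rate=16000):
    """Total duration of detected speech in a noise-only file, where any
    detected speech is by definition a false positive. `timestamps` are
    dicts with 'start'/'end' sample indices, as returned by
    get_speech_timestamps()."""
    return sum(t["end"] - t["start"] for t in timestamps) / sample_rate
```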

In any case, while waiting for, and looking forward to, v5: if you would be so kind as to report your attempts to replicate the numbers in that table, I'd be happy to hear about it!

IntendedConsequence commented 1 year ago

This sounds like something related to my experience as well. After using v4 for a while, I had to go back to v3. While overall speech detection seemed a bit better in v4, and more precise near word boundaries, it exhibits a consistent tendency toward false positives: long stretches of non-speech (1-2 minutes) at the beginning and end of audio files are mistakenly flagged as speech. For my use case this isn't worth the minor accuracy increase, since I can simply increase the padding between speech segments.

Now I'm not ruling out a mistake in my code, and I have never tested it formally, but subjectively it seems like it might be related to this issue.
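(For reference, the padding I mean is the speech_pad_ms knob of get_speech_timestamps(); a small usage sketch, with the file path being illustrative:)

```python
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = utils

wav = read_audio("recording.wav", sampling_rate=16000)  # illustrative path
# A larger speech_pad_ms widens each detected segment on both sides,
# trading some extra audio for fewer clipped word boundaries.
timestamps = get_speech_timestamps(wav, model, speech_pad_ms=200)
```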

dgoryeo commented 1 year ago

@IntendedConsequence, just a quick novice question: how does one invoke the v3 model? Thanks.

IntendedConsequence commented 12 months ago

@dgoryeo I'm not sure what to tell you. I don't use Python for Silero v3/v4 anymore, just the onnxruntime C API. If I were you, I guess I would start by checking out an older repository revision from before the v4 update: https://github.com/snakers4/silero-vad/tree/v3.1
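(In Python, pinning the revision through torch.hub should also work, assuming the v3.1 tag exposes the same hubconf entry:)

```python
import torch

# The ':v3.1' ref pins torch.hub to the tagged pre-v4 revision of the repo.
model, utils = torch.hub.load("snakers4/silero-vad:v3.1", "silero_vad")
```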

snakers4 commented 11 months ago

We have finally been able to start work on V5 using this data, among others.

Jellun commented 11 months ago

That’s great news. Great to know that V5 is being worked on.


snakers4 commented 10 months ago

To be solved with a V5 release.

rishikksh20 commented 6 months ago

@snakers4 Can we fine-tune the VAD on our own data? We have in-house segmented data and would like to know whether it is possible to fine-tune this model. I am not able to find any fine-tuning code in this repo.

snakers4 commented 4 months ago

The new VAD version was released just now - https://github.com/snakers4/silero-vad/issues/2#issuecomment-2195433115

It was designed with this issue in mind and performance on noise-only data was significantly improved - https://github.com/snakers4/silero-vad/wiki/Quality-Metrics

When designing for this task we used your conclusions and ideas, so many thanks for this ticket!

Can you please re-run your tests, and if the issue persists, open a new issue referring to this one.

Many thanks!