Open — Amforever opened this issue 2 years ago
Hi,
Earlier I was not aware that PyTorch's CCE (nn.CrossEntropyLoss) already includes a log-softmax internally, so I applied it externally as well.
I also checked by removing the log-softmax from the model definition, and that gave roughly similar performance (~10% EER), while the ASVspoof 2021 baseline performance was 9.50% EER. I assume this small difference might be due to the CUDA version.
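The "roughly similar performance" either way is actually expected mathematically: log_softmax subtracts the same constant (logsumexp of the logits) from every component, and softmax is invariant to constant shifts, so an extra log_softmax in front of CrossEntropyLoss leaves the loss value unchanged. A minimal pure-Python sketch (no torch dependency, single sample, hand-rolled helpers mirroring the PyTorch ops):

```python
import math

def log_softmax(z):
    # log_softmax(z)_i = z_i - logsumexp(z); computed stably by
    # shifting by max(z) before exponentiating
    m = max(z)
    lse = m + math.log(sum(math.exp(x - m) for x in z))
    return [x - lse for x in z]

def cross_entropy(z, target):
    # what nn.CrossEntropyLoss computes per sample: -log_softmax(z)[target]
    return -log_softmax(z)[target]

logits = [2.0, -1.0, 0.5]
t = 0
ce_raw = cross_entropy(logits, t)                  # loss on raw logits
ce_double = cross_entropy(log_softmax(logits), t)  # extra log_softmax in front
print(abs(ce_raw - ce_double) < 1e-12)             # True: the losses coincide
```

So with CrossEntropyLoss as the criterion, the redundant log_softmax is harmless; any residual EER gap comes from other sources (e.g. nondeterminism across CUDA/cuDNN versions), not from the loss itself.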
Thanks
Thanks. I tested RawNet() on the ASVspoof 2019 LA dataset and achieved around 4.30% EER on the eval set. In addition, I found that the learning rate (lr) has a great impact on the EER: when I set lr=0.005, RawNet did not converge and got 100% EER. Honestly, I still have doubts about the principle of end-to-end anti-spoofing systems operating directly on WAV, even though RawNet can get good results. Thanks.
Hi,
ASVspoof 2021 challenge keys (ground truths) are now available on asvspoof website.
You can download ASVspoof 2021 evaluation ground truths directly from asvspoof website : https://www.asvspoof.org/
Thanks,
Regards, Hemlata
On Tue, Mar 8, 2022 at 5:09 PM undefined_XD @.***> wrote:
I'm trying to test my model on ASVspoof 2019 LA, but I have no idea where I can get the ground truth of the eval set. Can you tell me how to get it?
Thanks!
As I read in the PyTorch documentation, the CrossEntropyLoss() criterion combines nn.LogSoftmax() and nn.NLLLoss(). Why is there still an nn.LogSoftmax() as the last layer of RawNet(), and why do I get bad results if I remove it?
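One plausible explanation (an assumption about the repo's training code, which I haven't verified): if the training loop uses nn.NLLLoss rather than nn.CrossEntropyLoss, then the model's final LogSoftmax is load-bearing, because NLLLoss expects log-probabilities, not raw logits. A pure-Python sketch of the relationship, with hand-rolled stand-ins for the three PyTorch ops:

```python
import math

def log_softmax(z):
    # stand-in for nn.LogSoftmax: z_i - logsumexp(z), computed stably
    m = max(z)
    lse = m + math.log(sum(math.exp(x - m) for x in z))
    return [x - lse for x in z]

def nll_loss(log_probs, target):
    # stand-in for nn.NLLLoss: expects LOG-PROBABILITIES, returns -log p[target]
    return -log_probs[target]

def cross_entropy(z, target):
    # stand-in for nn.CrossEntropyLoss: expects RAW LOGITS;
    # per the docs, it is exactly NLLLoss(LogSoftmax(z))
    return nll_loss(log_softmax(z), target)

logits = [1.5, -0.3, 0.2]
t = 2
# the documented equivalence:
assert abs(cross_entropy(logits, t)
           - nll_loss(log_softmax(logits), t)) < 1e-12
# but NLLLoss on raw logits (LogSoftmax removed) is just the negated
# target logit, not a proper loss, which could explain the bad results:
print(nll_loss(logits, t))       # -0.2, meaningless as a likelihood
print(cross_entropy(logits, t))  # the correct per-sample loss
```

So the safe pairings are: raw logits with CrossEntropyLoss, or LogSoftmax output with NLLLoss. Removing LogSoftmax is only harmless if the criterion is switched to CrossEntropyLoss at the same time.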