hitachi-speech / EEND

End-to-End Neural Diarization

The result of BLSTM is better than Transformer? #39

Open DiLiangWU opened 2 years ago

DiLiangWU commented 2 years ago

I ran eend/egs/mini_librispeech/v1/run.sh and eend/egs/mini_librispeech/v1/local/run_blstm.sh, respectively. The result of run.sh (Transformer) matches RESULT.md (final DER = 29.96%). However, local/run_blstm.sh (BLSTM) gives DER = 17.02%, which is better than the Transformer. Is that expected? Furthermore, when I use the CALLHOME corpora, with simulated mixtures generated from SRE2008 and SWBD Cellular 1, the results also show that BLSTM is better than Transformer.

It is worth noting that a warning appeared when I ran local/run_blstm.sh:

```
/workspace/EEND/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/iterators/multiprocess_iterator.py:629: UserWarning: Shared memory size is too small. Please set shared_mem option for MultiprocessIterator. Expect shared memory size: 4389780 bytes. Actual shared memory size: 4171864 bytes.
```

I have no idea what this warning means (a sketch of what it seems to suggest is below). I'd also like to know the results you got when running mini_librispeech/v1/local/run_blstm.sh.
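For reference, here is a minimal sketch of what the warning seems to ask for. This is my own guess, not the EEND training code; the dummy dataset, batch size, and shared_mem value are placeholders, and the real recipe may construct its iterator differently:

```python
# Minimal sketch (assumptions only): pass an explicit shared_mem size to
# chainer's MultiprocessIterator, as the warning suggests.
import numpy as np
from chainer.iterators import MultiprocessIterator

# Dummy dataset standing in for the real EEND feature/label pairs.
dataset = [(np.zeros((500, 345), dtype=np.float32),
            np.zeros((500, 2), dtype=np.int32)) for _ in range(8)]

it = MultiprocessIterator(
    dataset,
    batch_size=4,
    n_processes=2,
    # Per-sample shared memory in bytes; chosen (hypothetically) to be
    # larger than the "Expect shared memory size: 4389780 bytes" in the warning.
    shared_mem=8 * 1024 * 1024,
)
batch = it.next()
it.finalize()
```

Setting shared_mem above the "Expect shared memory size" reported in the warning should make it disappear, but I am not sure whether it has any effect on the DER difference.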

Thank you very much.