haoxiangsnr / spiking-fullsubnet

Official repository of Spiking-FullSubNet, the Intel N-DNS Challenge Algorithmic Track Winner.
https://haoxiangsnr.github.io/spiking-fullsubnet/
MIT License

Some Reproducibility problems #4

Open Michaeljurado42 opened 5 months ago

Michaeljurado42 commented 5 months ago

Hello. I have found a couple of reproducibility issues. Some are easier to fix than others.

In `/recipes/intel_ndns/spiking_fullsubnet/dataloader.py`, the code reads `self.noisy_files = glob.glob(root + "noisy/**.wav")`. On my machine, the pattern `"noisy/*.wav"` works instead.
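For reference, a minimal self-contained sketch of how `glob` treats these patterns (the directory layout here is made up for illustration): `*.wav` matches files directly under a directory, while descending into subdirectories requires `**` as its own path component plus `recursive=True`:

```python
import glob
import os
import tempfile

# Throwaway layout for illustration: noisy/a.wav and noisy/sub/b.wav
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "noisy", "sub"))
for rel in ("noisy/a.wav", "noisy/sub/b.wav"):
    open(os.path.join(root, rel), "w").close()

# Matches only files directly inside noisy/
flat = glob.glob(os.path.join(root, "noisy", "*.wav"))

# Recursive variant: "**" must be a standalone path component and
# only descends into subdirectories when recursive=True is passed.
deep = glob.glob(os.path.join(root, "noisy", "**", "*.wav"), recursive=True)

print(len(flat), len(deep))  # → 1 2
```

So if the dataset keeps all clips flat under `noisy/`, `"noisy/*.wav"` is the idiomatic pattern; `"noisy/**.wav"` is not a recursive match and its behavior is easy to misread.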

In `recipes/intel_ndns/spiking_fullsubnet_freeze_phase/trainer.py`, the code will not execute because the following import statement is broken:

from audiozen.trainer_backup.base_trainer_gan_accelerate_ddp_validate import BaseTrainer

`BaseTrainer` does not seem to exist in the repository; however, `Trainer` is present.

`IntelSISNR` is also missing from the repository.

Lastly, I believe the `recipes/intel_ndns/spiking_fullsubnet/exp` directory is missing the checkpoint files, so you cannot resume training or test the model.

haoxiangsnr commented 5 months ago

Hi, @Michaeljurado42. Thank you for your attention. Currently, our repo contains two versions of the code:

  1. The frozen version, which serves as a backup for the code used in a previous competition. However, due to a restructuring in the audiozen directory, this version can no longer be directly used for inference. If you need to verify the experimental results from that time, please refer to this specific commit: 38fe020. There you will find everything you need. After switching to this commit, you can place the checkpoints from the model_zoo into the exp directory and use -M test for inference or -M train to retrain the model.

  2. The latest version of the code has undergone some restructuring and optimization to make it more understandable for readers. We've also introduced accelerate to support better training practices. I believe you can follow the instructions in the help documentation to run the training code directly. The pre-trained model checkpoints and a more detailed paper will be released by next weekend, so please stay tuned.