Dear Shumei,
Thank you for your interest and your kind words. The files have been published by the task organizers and are a bit hard to find. Here is a link: https://github.com/Kota-Dohi/dcase2022_evaluator
I hope this solves your issues. Otherwise, just let me know.
Best, Kevin
Dear Kevin,
Thank you for your previous assistance. I apologize for reaching out again, but I have encountered another issue while reproducing the results from your work.
After running the experiments on the bearing dataset, I obtained a mean AUC of 70.13 and a mean pAUC of 58.38. For the slider dataset, the results were a mean AUC of 82.96 and a mean pAUC of 63.52. These results seem significantly different from those reported in your paper.
I used the following parameters:
Batch size: 64
Test batch size: 64
Epochs: 100
Aeons: 4
Alpha: 1
Number of subclusters: 16
Ensemble size: 10

Could you please advise if there might be any factors or steps I could be overlooking that might explain these discrepancies? Any guidance you could provide would be greatly appreciated.
Thank you for your time and support.
Best regards, Shumei Yang
Dear Shumei,
I agree, the performances are very different, which should not be the case, but it is hard for me to tell what went wrong. Did you run the code as-is? And did you obtain the results you mentioned on the development set or on the evaluation set?
FYI: There are also more recent versions of this ASD system yielding much better performance: https://github.com/wilkinghoff/icassp2023 https://github.com/wilkinghoff/ssl4asd https://github.com/wilkinghoff/AdaProj
Best, Kevin
Dear Kevin,
Thank you for your previous response. I ran the code on the development dataset, and I have attached a screenshot of the results I obtained. Additionally, I replaced the main.py file in the DCASE 2022 codebase with the one from the ICASSP 2023 repository and ran it, but unfortunately, the results were still not satisfactory.
I am not sure where the issue might lie. Could it be related to my installation environment? That said, my current environment does run the code without errors.
Do you have any insights or suggestions on what might be causing these discrepancies? I appreciate your time and any guidance you can offer.
Attached are two screenshots: the first shows the results from running the original DCASE 2022 code, and the second shows the results after replacing main.py in the DCASE 2022 code with the ICASSP 2023 version.
Thank you again for your time and assistance.
Best regards, Shumei Yang
Dear Shumei,
Thank you for providing detailed information. Could it be that you are only training on data of a single machine type, e.g. bearing? At least from the screenshots you provided, this seems to be the case. The system is supposed to be trained using the data of all machine types to make the task more challenging and to encode more information about the machine sounds into the embeddings.
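To make this concrete, here is a minimal sketch of what I mean (illustrative only, not the exact code from my repository; the directory layout and the `train_model` placeholder are assumptions based on the standard DCASE 2022 development set):

```python
import glob
import os

# All seven machine types of the DCASE 2022 development set.
machine_types = ["bearing", "fan", "gearbox", "slider", "valve", "ToyCar", "ToyTrain"]

train_files, labels = [], []
for machine in machine_types:
    for path in sorted(glob.glob(os.path.join("dev_data", machine, "train", "*.wav"))):
        train_files.append(path)
        # The machine type (plus section/attribute information in practice)
        # serves as an auxiliary class label, so the embeddings have to
        # discriminate between the different machines.
        labels.append(machine)

# train_model is a placeholder for the actual training routine; the point is
# that it receives the data of all machine types at once, not one type at a time.
# model = train_model(train_files, labels)
```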
Best, Kevin
Dear Kevin,
Thank you very much for your response, but I still have some unresolved questions.
I ran the models on each machine type separately because running all seven machine types together exceeded my GPU memory capacity. I am wondering whether this might be the reason for the suboptimal performance I am observing. The code does print out the AUC and pAUC for both the source and target domains after training on each machine type, but could training on the machine types separately rather than together be affecting the results?
Thank you again for your time and assistance.
Best regards, Shumei Yang
Dear Shumei,
Yes, this should be the reason, because the different machine types affect each other during training. Your GPU memory should not be a problem; if it were, you could just reduce the batch size. I suspect you do not have enough main memory to load the entire dataset at once. You can implement a data loader that streams the data from your hard drive to GPU memory batch by batch, but, depending on your hard drive, this will slow down training significantly.
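For illustration, such a loader could look roughly as follows (a minimal sketch assuming a Keras-style training loop and fixed-length clips; the class name and the `soundfile` dependency are my assumptions, not code from the repository):

```python
import math
import numpy as np
import soundfile as sf
import tensorflow as tf

class WaveFileSequence(tf.keras.utils.Sequence):
    """Reads audio files from disk batch by batch instead of keeping
    the whole dataset in main memory."""

    def __init__(self, file_paths, labels, batch_size=64):
        self.file_paths = list(file_paths)
        self.labels = np.asarray(labels)
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return math.ceil(len(self.file_paths) / self.batch_size)

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        # Each file is read only when its batch is requested; this assumes
        # all clips have the same length, as in the DCASE data.
        batch_audio = np.stack([sf.read(p)[0] for p in self.file_paths[lo:hi]])
        return batch_audio, self.labels[lo:hi]

# Usage with a Keras model (placeholder):
# model.fit(WaveFileSequence(train_files, train_labels, batch_size=64), epochs=100)
```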
I hope this helps.
Best, Kevin
Dear Kevin,
Thank you very much! I will try the method you suggested. I sincerely wish you good health, successful research, and that all your wishes come true.
Best, Shumei Yang
Thank you, I wish you the same!
Dear Kevin Wilkinghoff,
I hope this message finds you well.
My name is Shumei Yang, and I am currently working on reproducing the experimental results from your paper titled "An Outlier Exposed Anomalous Sound Detection System for Domain Generalization in Machine Condition Monitoring". I greatly appreciate the code and research contributions you have shared.
During the reproduction process, I encountered some questions, particularly regarding how the .csv files in the evaluation-dataset part of the code are generated. I noticed that files under paths such as './dcase2022_evaluator-main/ground_truth_data/groundtruth' and './dcase2022_evaluator-main/ground_truth_domain/groundtruth' are used as ground-truth label files for calculating the AUC and pAUC scores.
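For reference, my understanding is that these files are consumed roughly like this when computing the scores (a simplified sketch using sklearn with hypothetical file names, not the exact evaluator code; the DCASE 2022 task uses p = 0.1 for the pAUC):

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical file names for a single machine type and section; the real
# evaluator iterates over all machine types, sections, and domains.
truth = pd.read_csv("ground_truth_data/ground_truth_bearing_section_00_test.csv",
                    header=None, names=["file", "label"])   # label: 0 = normal, 1 = anomaly
scores = pd.read_csv("anomaly_score_bearing_section_00_test.csv",
                     header=None, names=["file", "score"])

merged = truth.merge(scores, on="file")
auc = roc_auc_score(merged["label"], merged["score"])
# pAUC: the AUC restricted to the low false-positive-rate region, p = 0.1.
pauc = roc_auc_score(merged["label"], merged["score"], max_fpr=0.1)
print(f"AUC = {auc:.4f}, pAUC = {pauc:.4f}")
```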
I would like to ask for some clarification on how these .csv files were generated or obtained. Were these files generated through a script, or were they manually created during your experiments? If they were generated via a script, would you be able to share the corresponding steps or code used for generating them?
Thank you very much for taking the time to consider my question. As a student about to graduate, I really need your help, and I look forward to your response. Finally, I wish you good health and all the best!
Best regards, Shumei Yang