SineX8 closed this issue 2 years ago.
I can, but I'm not sure it will help you. Do you want a list of checksums for every file in the directory? Depending on your purpose, the method of creating a checksum will differ... Send me a script and I'll run it.
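For reference, a per-file checksum script might look like the sketch below. The directory name `fma_full` is taken from the discussion; MD5 is just one reasonable choice of hash, and the chunked read is only there to avoid loading large audio files into memory at once.

```python
import hashlib
import os

def checksum_directory(root):
    """Return a sorted list of (relative_path, md5_hex) for every file under root."""
    results = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large audio files are not held in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            results.append((os.path.relpath(path, root), h.hexdigest()))
    return sorted(results)

if __name__ == "__main__":
    for rel_path, digest in checksum_directory("fma_full"):
        print(digest, rel_path)
```

Comparing the two sorted lists (e.g. with `diff`) would then pinpoint exactly which files differ between the Google Drive copy and the local disk.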
@mimbres Please provide the checksum of the fma_full directory of this project first; that would indeed help me. If that directory is correct, my situation is fine. I've checked the number of files in your Google Drive and on my disk, and they're the same. However, the number of segments (about 54M) generated in my evaluation experiment with a window length of 1 s and a hop of 0.5 s differs from the number (about 56M) stated in the paper. So I'm wondering if there's something wrong with my dummy dataset.
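For what it's worth, the expected segment count with a 1 s window and 0.5 s hop can be sanity-checked per file with a sketch like this (the 30 s clip length in the example is an illustrative assumption, not a value confirmed in this thread):

```python
def n_segments(duration_sec, win_sec=1.0, hop_sec=0.5):
    """Number of full windows of length win_sec, advanced by hop_sec,
    that fit in an audio clip of the given duration."""
    if duration_sec < win_sec:
        return 0
    return int((duration_sec - win_sec) // hop_sec) + 1

# e.g. a 30 s clip: (30 - 1) / 0.5 + 1 = 59 segments
print(n_segments(30.0))
```

Summing this over every file in the dataset and comparing against the reported total would show whether the 54M-vs-56M gap comes from missing files or from a different segmentation convention.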
@SineX8 Got it!
@SineX8 https://drive.google.com/drive/folders/1Bqy5Xsxh4ArmcWsKfQOBpp-sJIVU8UG5?usp=sharing I'm also checking the number of segments generated from this version of the dataset, because I remember the number differed from what I stated in the paper. Sorry for the inconvenience.
@mimbres Thank you for your help! I'd also like to know whether the results reported in your paper were based on this version of the dataset.
@SineX8 No. I found some duplicates between test and dummy.
This is what I see when running `python run.py generate`:
=== Generating fingerprint from 'dummy_db' bsz=125, 53754198 items, d=128 ===
So, 56M is not correct. Thanks for finding an erratum.
@mimbres So it's reasonable that the results I obtained using your model were better than those reported in your paper. Thanks for your help again!
@SineX8 Yes, as reported in #18
@mimbres Or you could provide checksums of the other files separately, so that I can locate the differing files...