Singingkettle / ChangShuoRadioRecognition

AI Framework of Radio Recognition
Apache License 2.0

About mldnn and cache #1

Open Pragmatism0220 opened 1 year ago

Pragmatism0220 commented 1 year ago

Hello! I attempted to use mldnn on the DeepSig 2018 dataset, but I removed the entries with SNR greater than 20 dB from the JSON file and then ran cache_amc.py, which generated a series of .pkl files. Afterwards, I tried to train mldnn, but the loss stayed flat and the accuracy did not change at all. As a comparison, I trained ResCNN in the same way and it converged correctly. I noticed the use of a cache in mldnn; could that be the reason for this behavior? Did I do something wrong? Thank you!
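For context, the SNR filtering I did was essentially the following. This is only a minimal sketch: the annotation file name and the `snr` field are assumptions about the JSON layout, not the exact schema that cache_amc.py reads.

```python
import json

# Hypothetical annotation file for the DeepSig 2018 dataset; the actual
# path and schema used by cache_amc.py in this repo may differ.
with open("annotations.json", "r") as f:
    anno = json.load(f)

# Keep only items whose SNR is at most 20 dB (field names assumed).
anno["items"] = [item for item in anno.get("items", []) if item.get("snr", 0) <= 20]

with open("annotations_snr_le_20.json", "w") as f:
    json.dump(anno, f)
```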

Singingkettle commented 7 months ago

Hi, sorry for the very late reply. The cache is just a rough trick to load all training and validation data into memory before training, so it should not be the cause. The non-decreasing loss is more likely due to the default initialization of the CNN and LSTM layers in official PyTorch; I suggest using Xavier initialization for the CNN. Recently, I have been rewriting all of the code to introduce more AMC algorithms from AMR-Benchmark (https://github.com/Richardzhangxx/AMR-Benchmark). Thank you for your interest.
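A minimal sketch of applying Xavier initialization to the CNN layers in PyTorch (the module types targeted here are illustrative, not tied to the exact mldnn layers):

```python
import torch.nn as nn

def init_weights(m):
    # Apply Xavier (Glorot) initialization to convolutional and linear layers;
    # leave other modules (e.g. LSTM, BatchNorm) with their PyTorch defaults.
    if isinstance(m, (nn.Conv1d, nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# Usage: call model.apply(init_weights) once, before training starts.
```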