mimbres / neural-audio-fp

https://mimbres.github.io/neural-audio-fp
MIT License

FileNotFoundError: [Errno 2] No such file or directory: './logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/query_shape.npy' #30

Closed: kasireddygariDineshKumarReddy closed this issue 2 years ago

kasireddygariDineshKumarReddy commented 2 years ago

While running this command: $ python run.py evaluate CHECKPOINT_NAME CHECKPOINT_INDEX

FileNotFoundError: [Errno 2] No such file or directory: './logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/query_shape.npy'

I'm getting this error; please help me with the search and evaluation process. Thank you!
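
This error usually means that ./logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/ does not exist yet, for example because the literal placeholders were passed instead of real values, or because the generate step has not been run. A minimal pre-flight check along these lines (a hypothetical helper script, not part of the repository) can confirm which case applies:

import os
import sys

# Hypothetical pre-flight check (e.g. check_emb.py), not part of neural-audio-fp.
# Usage: python check_emb.py my_first_experiment 100
checkpoint_name, checkpoint_index = sys.argv[1], sys.argv[2]
query_shape_path = os.path.join(
    './logs/emb', checkpoint_name, checkpoint_index, 'query_shape.npy')

if os.path.isfile(query_shape_path):
    print(f'Found {query_shape_path}; evaluation should be able to load it.')
else:
    print(f'Missing {query_shape_path}.')
    print('Check that CHECKPOINT_NAME and CHECKPOINT_INDEX are real values from')
    print('your training, and run `python run.py generate CHECKPOINT_NAME` first.')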

kasireddygariDineshKumarReddy commented 2 years ago

I have modified the path './logs/emb/CHECKPOINT_NAME/100/query_shape.npy' to './logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/query_shape.npy'. After executing the command $ python run.py evaluate CHECKPOINT_NAME CHECKPOINT_INDEX, I got this error:

RuntimeError: Error in void faiss::Clustering::train_encoded(faiss::Clustering::idx_t, const uint8_t*, const faiss::Index*, faiss::Index&, const float*) at /root/miniconda3/conda-bld/faiss-pkg_1608526791224/work/faiss/Clustering.cpp:294: Error: 'std::isfinite (x[i])' failed: input contains NaN's or Inf's

Please look into this.
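
The faiss error indicates that the vectors handed to index training contain NaN or Inf values, which usually points to corrupt fingerprints (for example, generated from a diverged checkpoint or with mismatched arguments) rather than a problem in faiss itself. A quick NumPy scan can confirm this; the sketch below assumes the embeddings are stored as a float32 memmap named query.mm next to query_shape.npy, which may not match the actual output layout.

import os
import numpy as np

# Scan generated fingerprints for NaN/Inf before faiss clustering.
# Assumption: embeddings live in a float32 memmap named 'query.mm' next to
# 'query_shape.npy'; adjust the names and path to your actual output.
emb_dir = './logs/emb/my_first_experiment/100'  # hypothetical example path
shape = tuple(np.load(os.path.join(emb_dir, 'query_shape.npy')))
emb = np.memmap(os.path.join(emb_dir, 'query.mm'),
                dtype='float32', mode='r', shape=shape)

n_bad = emb.size - int(np.count_nonzero(np.isfinite(emb)))
print(f'{n_bad} non-finite values out of {emb.size}')
# Any non-finite values point to a broken checkpoint or a mismatched generate
# run; the fix is to re-train or re-generate rather than to patch the files.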

mimbres commented 2 years ago

@kasireddygariDineshKumarReddy Here you should replace CHECKPOINT_NAME with the name of your pre-trained checkpoint, and replace CHECKPOINT_INDEX with the proper epoch number.

For example, your training would be something like:

python run.py train my_first_experiment 

After training, you should generate fingerprints by:

python run.py generate my_first_experiment

Then you can evaluate the last epoch checkpoint of the trained model:

python run.py evaluate my_first_experiment

Or

# in the case of using the checkpoint of the 100th epoch
python run.py evaluate my_first_experiment 100
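
For reference, the three steps above can also be chained in a small wrapper script. This is only a sketch: the run.py sub-commands (train / generate / evaluate) come from this thread, but the wrapper itself is hypothetical.

import subprocess

# Hypothetical convenience wrapper for the three steps above; the run.py
# sub-commands come from this thread, everything else is just a sketch.
EXP_NAME = 'my_first_experiment'
EPOCH = '100'  # optional: drop it to evaluate the last saved epoch

subprocess.run(['python', 'run.py', 'train', EXP_NAME], check=True)
subprocess.run(['python', 'run.py', 'generate', EXP_NAME], check=True)
subprocess.run(['python', 'run.py', 'evaluate', EXP_NAME, EPOCH], check=True)
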
kasireddygariDineshKumarReddy commented 2 years ago

What is meant by a pre-trained checkpoint file name? I can't understand "Here you should replace CHECKPOINT_NAME with your pre-trained checkpoint file name".

mimbres commented 2 years ago

@kasireddygariDineshKumarReddy You have to train the model first. The saved model parameters (a.k.a. the checkpoint) will be located in ./logs/emb/CHECKPOINT_NAME. As in my answer above, CHECKPOINT_NAME can be my_first_experiment, for example, or anything else that was used in your training. If you're not sure, just run it exactly as shown in the answer above.
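
If you are not sure which names exist on disk, a short listing script can show them. This is only a sketch that assumes the ./logs/emb/<CHECKPOINT_NAME>/<epoch>/ layout described above.

import os

# List which CHECKPOINT_NAME / CHECKPOINT_INDEX values exist on disk, assuming
# the ./logs/emb/<CHECKPOINT_NAME>/<epoch>/ layout described above (a sketch).
emb_root = './logs/emb'
if not os.path.isdir(emb_root):
    print(f'{emb_root} not found -- run training and generation first.')
else:
    for name in sorted(os.listdir(emb_root)):
        path = os.path.join(emb_root, name)
        if os.path.isdir(path):
            print(f'{name}: {sorted(os.listdir(path))}')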