Closed rosana-sc7 closed 1 year ago
Hi! I'm new to neural networks and I'm having trouble figuring out how to evaluate your implementation. Currently I'm using an audio dataset that is different from your --eval_path and --eval_list, so I'm running this command:
python trainECAPAModel.py --eval --initial_model exps/pretrain.model --eval_list /eval_list_directory --eval_path /eval_path_directory
Is this the correct way to evaluate your implementation? Should I use any different arguments? The main issue is that I don't think I understand what exps/pretrain.model is, so I don't know how to use it.
Looking forward to your response! Thanks
You need to modify the evaluation code based on your dataset. You cannot just change the paths.
Hi again!
I also have one more question. I have just seen your PDF on this implementation. On the 44th slide there is a picture of a file named "scores.txt". I'm having trouble finding this file. Could you please show me the path to it? If you can't, could you please tell me where the data in the first column comes from in the code? Thank you so much again!
To summarize, you need to prepare two things: the list of predicted scores and the list of labels. For any dataset you use for evaluation, you have to generate these two lists and pass them to the scoring function to get the EER or minDCF. Every dataset needs to provide a list similar to trials.txt, and scores.txt contains your prediction results.
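To illustrate the idea, here is a minimal, generic sketch of computing the EER from those two lists. This is not the repository's own scoring code (the actual function names and file formats there may differ); it simply assumes you have one label (1 = same speaker, 0 = different speaker) and one score per trial:

```python
import numpy as np

def compute_eer(scores, labels):
    """Generic EER sketch: find the operating point where the
    miss rate (FNR) and false-alarm rate (FPR) are equal."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Sort trials by score, highest first, so each position is a threshold
    order = np.argsort(-scores)
    sorted_labels = labels[order]
    n_pos = sorted_labels.sum()
    n_neg = len(sorted_labels) - n_pos
    # Cumulative true/false positives as the threshold is lowered
    tp = np.cumsum(sorted_labels)
    fp = np.cumsum(1 - sorted_labels)
    fnr = 1.0 - tp / n_pos   # miss rate
    fpr = fp / n_neg         # false-alarm rate
    # EER is (approximately) where the two curves cross
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fnr[idx] + fpr[idx]) / 2.0

# Example: perfectly separated scores give an EER of 0
labels = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.2, 0.1]
print(compute_eer(scores, labels))  # -> 0.0
```

You would fill `labels` from your dataset's trial list (the trials.txt analogue) and `scores` from your model's predictions (the scores.txt analogue), then call a function like this to get the EER.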