I've found that the default validation metric for the supervised method is the word translation accuracy on the evaluation dictionary (dico_eval), and its default is the dictionary provided as the test set ('lang1-lang2.5000-6500.txt').
I think this is a problem: much published work uses that dictionary to evaluate models, so with this default the model is effectively tuned on the test data. I suggest the default validation metric for the supervised method should instead be VALIDATION_METRIC_UNSUP, i.e. 'mean_cosine-csls_knn_10-S2T-10000', which does not rely on any external data. Alternatively, the default dictionary used for validation should be the same one used as training data.
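For context, the unsupervised criterion named above can be sketched roughly as follows. This is a simplified NumPy re-implementation of the idea behind 'mean_cosine-csls_knn_10-S2T-10000', not MUSE's actual code; in particular, restricting the target-side neighborhood term to the truncated source vocabulary is a simplification:

```python
import numpy as np

def mean_cosine_csls(src_emb, tgt_emb, knn=10, max_words=10000):
    """Unsupervised model-selection score in the spirit of
    'mean_cosine-csls_knn_10-S2T-10000': for the most frequent source
    words, find the CSLS nearest target neighbor (source-to-target) and
    average the cosine similarities of those pairs. Simplified sketch."""
    # L2-normalize rows so dot products are cosine similarities.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    src = src[:max_words]                    # keep the most frequent source words
    sims = src @ tgt.T                       # S2T cosine similarities
    # CSLS penalizes hubs: subtract each word's average similarity to its
    # knn nearest neighbors in the other space.
    r_s = np.sort(sims, axis=1)[:, -knn:].mean(axis=1)           # per source word
    r_t = np.sort(tgt @ src.T, axis=1)[:, -knn:].mean(axis=1)    # per target word
    csls = 2 * sims - r_s[:, None] - r_t[None, :]
    nn = csls.argmax(axis=1)                 # CSLS nearest target for each source word
    return float(sims[np.arange(len(src)), nn].mean())
```

Because this score is computed from the embeddings alone, selecting the model that maximizes it never touches the test dictionary.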