jjacampos / FeedbackWeightedLearning

MIT License

Unable to achieve published result of the paper #2

Open pavlostheodorou opened 3 years ago

pavlostheodorou commented 3 years ago

I ran into some trouble trying to replicate your results.

jjacampos commented 3 years ago

Could you give further information about the troubles you faced?

pavlostheodorou commented 3 years ago

I downloaded the code and the required data. I ran a script to split the DBPEDIA_train.csv data (with the help of train_indexes.txt and deployment_indexes.txt) into train.csv and deployment.csv, respectively. Then I ran the 'feedback_weighted_learning.py' script.
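For reference, this is roughly the splitting script I used. It assumes each `*_indexes.txt` file contains one zero-based row index per line (an assumption about the file format on my part):

```python
import csv
import os
import tempfile

def split_csv(source_path, index_path, output_path):
    """Copy the header plus the rows of source_path whose zero-based
    position is listed in index_path into output_path."""
    with open(index_path) as f:
        keep = {int(line.strip()) for line in f if line.strip()}
    with open(source_path, newline="") as src, \
         open(output_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        writer.writerow(next(reader))  # keep the header row
        for i, row in enumerate(reader):
            if i in keep:
                writer.writerow(row)

# Tiny synthetic demo (the real run uses DBPEDIA_train.csv and
# train_indexes.txt / deployment_indexes.txt instead).
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "DBPEDIA_train.csv")
with open(src, "w", newline="") as f:
    csv.writer(f).writerows(
        [["text", "label"], ["a", "x"], ["b", "y"], ["c", "x"]]
    )
idx = os.path.join(tmp, "train_indexes.txt")
with open(idx, "w") as f:
    f.write("0\n2\n")
out = os.path.join(tmp, "train.csv")
split_csv(src, idx, out)
with open(out) as f:
    print(f.read().strip().splitlines())  # ['text,label', 'a,x', 'c,x']
```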

The results I get are:

- S0 System evaluation -> F1: 0.9656051584037898
- Deployment with feedback evaluation -> F1: 0.9809027206632234
- Deployment with supervised evaluation -> F1: 0.982498272855874

I was also wondering about the purpose of the confusion matrix for each method. What does it represent?

jjacampos commented 3 years ago

I see that in your experiments you are using 9 categories from the DBpedia Classes dataset, but as we explain in the paper (subsection 4.1) we use 219 categories.

The confusion matrix gives more information about the performance of the algorithm, so we include it as log information (https://en.wikipedia.org/wiki/Confusion_matrix).
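In short, each row corresponds to a true class and each column to a predicted class, so the cell (t, p) counts how often class t was predicted as p. A toy illustration with made-up labels (not taken from the repository's logs):

```python
from collections import Counter

# Toy data: two classes, four examples.
y_true = ["Animal", "Animal", "Plant", "Plant"]
y_pred = ["Animal", "Plant", "Plant", "Plant"]
labels = ["Animal", "Plant"]

# Count (true, predicted) pairs, then lay them out as a matrix:
# rows = true class, columns = predicted class.
counts = Counter(zip(y_true, y_pred))
matrix = [[counts[(t, p)] for p in labels] for t in labels]
print(matrix)  # [[1, 1], [0, 2]]
```

Here the off-diagonal 1 shows one "Animal" example misclassified as "Plant"; diagonal entries are the correct predictions.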

pavlostheodorou commented 3 years ago

How can I use the 219 categories?

pavlostheodorou commented 3 years ago

I don't know if it is the right way to do it, but I managed to use the 219 categories by changing the columns in the .csv files.
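Concretely, what I did amounts to reading the labels from the fine-grained label column instead of the coarse one. The DBpedia Classes CSVs carry several label columns; I am assuming here they are named `l1`/`l2`/`l3`, with `l3` holding the 219 fine-grained categories (the column names may differ in your copy of the data):

```python
import csv
import io

# Inline stand-in for one row of the DBpedia Classes CSV; the real
# files have many rows with the same three label columns.
raw = "text,l1,l2,l3\nsome abstract,Agent,Athlete,Cyclist\n"

rows = list(csv.DictReader(io.StringIO(raw)))
# Switching from r["l1"] (9 coarse classes) to r["l3"] selects the
# 219-category labels instead.
labels = [r["l3"] for r in rows]
print(labels)  # ['Cyclist']
```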