Closed ml-nic closed 2 years ago
Thanks @ml-nic and sorry for the delayed reply.
You can use the analyse.sh script to get insights into how predicates and single tokens were translated:
sh analyse.sh <dataset> <NL test set> <SPARQL test set>
The filter_dataset.py script is used to generate smaller training sets; it takes a few parameters. The JSON file passed via --used_resources contains a dictionary with an occurrence counter for each resource:
python filter_dataset.py --dataset <dataset> --used_resources used_resources.json --minimum <min> --comp <all|any>
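As a rough sketch of what the filtering step might do, based on the description above (the exact JSON format, the example structure, and the function name here are assumptions, not the script's actual internals):

```python
def filter_examples(examples, used_resources, minimum, comp="all"):
    """Keep examples whose resources occur at least `minimum` times.

    `used_resources` is assumed to be the counter dictionary from
    used_resources.json, mapping each resource to its occurrence count.
    `comp="all"` requires every resource in an example to meet the
    threshold; `comp="any"` requires at least one.
    """
    check = all if comp == "all" else any
    return [
        ex for ex in examples
        if check(used_resources.get(r, 0) >= minimum for r in ex["resources"])
    ]

# Hypothetical data illustrating the counter dictionary described above.
used_resources = {"dbo:birthPlace": 120, "dbo:spouse": 3}
examples = [
    {"id": 1, "resources": ["dbo:birthPlace"]},
    {"id": 2, "resources": ["dbo:birthPlace", "dbo:spouse"]},
]

print([ex["id"] for ex in filter_examples(examples, used_resources, 100, "all")])  # [1]
print([ex["id"] for ex in filter_examples(examples, used_resources, 100, "any")])  # [1, 2]
```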
Hello everyone,
First of all, thank you for sharing the code of this project. I was able to train the model and make some predictions. Now I want to find the shortcomings of the model, so I would like to analyze which questions/queries the model performs well on.
I found the "analyse.sh" and "filter_dataset.py" scripts. Could you tell me what the purpose of these files is and how to use them?
Thank you for your time.
Kind regards,
Nicolas