Hi,
A short question about running the repo with my own texts: do the texts I want to evaluate have to be in the "ant_data" format?
That "ant_data" comes with annotations, so if we have to annotate our texts the metric it's not anymore an Unreferenced metric as claimed in the paper. I just ask this because when running the repo the number of outputs is 400. The same that the number of texts in the ant_data.txt.
From looking at the repo, I thought the texts we want to analyze should go in the init_data folder, split into train, dev, and test. But apparently that's not the case.
Could you please confirm the correct way to run the metric on our own texts? :)
Thanks so much for your help. I really appreciate it and love your work.
Have a nice day,
Victor.