DeveloperLiberationFront / AffectAnalysisToolEvaluation

SEmotion_18 paper on evaluating the reliability of sentiment and politeness analysis tools

Refine tool descriptions: Give author names and training corpus for all the tools #4

Closed nasifimtiazohi closed 6 years ago

nasifimtiazohi commented 6 years ago

R3

In the description of the tools, the authors should briefly describe whether the tools are dictionary-based or whether they use some type of supervised model; for the latter, a short mention of the type of artifacts with which the tools were trained would be useful for understanding the differences in application/training domain.

R2

In your tool selection section, some tools have the author of the tool mentioned while others don't. Perhaps state authors for all in this case.

nasifimtiazohi commented 6 years ago

Fixed in https://github.com/DeveloperLiberationFront/AffectAnalysisToolEvaluation/commit/be8cbc2281231cd64691622609afde0903cd3aa0

nasifimtiazohi commented 6 years ago

Whether a tool is dictionary-based or uses a supervised model is hard to gauge.

For example, SentiStrength mainly uses a lexicon list, but the list and the strength assigned to each word were generated through machine learning on training data.
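To make the distinction concrete, here is a minimal, hypothetical sketch of a lexicon-based scorer in the spirit of SentiStrength; the word list, strengths, and `score` function are invented for illustration and are not the tool's actual lexicon or algorithm, though the real per-word strengths were likewise derived from labeled training data:

```python
# Hypothetical lexicon: word -> signed strength. SentiStrength reports a
# positive score in 1..5 and a negative score in -1..-5 per text; the
# entries below are made up for illustration only.
LEXICON = {
    "love": 3,
    "great": 2,
    "hate": -4,
    "terrible": -3,
}

def score(text: str) -> tuple[int, int]:
    """Return (positive, negative) strengths, defaulting to (1, -1) for neutral text."""
    pos, neg = 1, -1
    for token in text.lower().split():
        strength = LEXICON.get(token.strip(".,!?"), 0)
        if strength > 0:
            pos = max(pos, strength)   # keep the strongest positive term seen
        elif strength < 0:
            neg = min(neg, strength)   # keep the strongest negative term seen
    return pos, neg

print(score("I love this API but the docs are terrible"))  # -> (3, -3)
```

The point is that the scoring step itself is a dictionary lookup, while the dictionary's contents come from supervised learning, so the tool doesn't fit cleanly into either category.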

The same holds for the other tools: while they use some learning, their algorithms are hard to summarize briefly.

I think I've given enough detail, as far as my understanding goes.

Also, for Alchemy, I couldn't trace the original paper. The tool is from IBM.