Hello! Do you happen to have any more information about which version of the Blitzer et al. multi-domain sentiment dataset you used for this? The version posted on Mark's website ( https://www.cs.jhu.edu/~mdredze/datasets/sentiment/ ) does not seem to contain any files that line up with what `evaluation.py` is looking for (`multi-domain-sentiment_indomain_*.txt`), and unlike the way the script handles the SemEval dataset, the code for generating the train/test split for the multi-domain data does not appear to be part of this script. Any pointers would be most welcome!