I have finished setting up the framework for Spark and CoreNLP, including the annotator dependency stuff.
What we need to do:
Take a look at CoreNLP.java, lines 36-50. Those are the functionalities we need to implement (at least most of them).
Same file, lines 155-183: this is the major part. Each line is read in as a pair (Index, Line), and the expected output is an iterator over the array [((Index, Func0), Output0), ..., ((Index, FuncN), OutputN)]. I have implemented the two easiest functionalities; the others should be done in the same way. Don't underestimate this: Stanford CoreNLP is poorly documented, and this part can get nasty.
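As a shape check, here is a plain-Java sketch of what the flatMap body should emit for one input pair. This deliberately uses no Spark or CoreNLP; the class name, method name, and the placeholder "output" strings are all hypothetical, standing in for real annotator results:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;

public class FlatMapShape {
    // For one (index, line) input pair, emit one ((index, funcName), output)
    // record per functionality. In the real job, each output would come from
    // running the corresponding CoreNLP annotator on the line; here it is a
    // placeholder string so the record shape is easy to see.
    static List<Entry<Entry<Long, String>, String>> process(long index, String line,
                                                            List<String> funcs) {
        List<Entry<Entry<Long, String>, String>> out = new ArrayList<>();
        for (String func : funcs) {
            String output = func + "(" + line + ")"; // placeholder annotator output
            out.add(new SimpleEntry<>(new SimpleEntry<>(index, func), output));
        }
        return out;
    }

    public static void main(String[] args) {
        for (Entry<Entry<Long, String>, String> e
                : process(0L, "Hello world.", List.of("tokenize", "ssplit"))) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```

One input line with N functionalities yields N records, so flattening over all lines gives the iterator described above.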
Once those functionalities are done, at line 184 (after the flatMap), group by functionality and then sort by Index. The output of each functionality should go into its own folder, and the order of sentences must be unchanged within each folder.
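A minimal stream-based sketch of that grouping-and-ordering step, again in plain Java rather than Spark (in the real job this would be the groupBy/sortBy on the RDD followed by one save per functionality; the class and method names here are hypothetical):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.stream.Collectors;

public class GroupSort {
    // Group ((index, func), output) records by func, then sort each group's
    // records by index so that sentence order is preserved per functionality.
    static Map<String, List<String>> groupByFunc(
            List<Entry<Entry<Long, String>, String>> records) {
        return records.stream()
            .collect(Collectors.groupingBy(
                e -> e.getKey().getValue(),        // group key: functionality name
                Collectors.collectingAndThen(
                    Collectors.toList(),
                    group -> group.stream()
                        // restore original sentence order within the group
                        .sorted(Comparator.comparing(e -> e.getKey().getKey()))
                        .map(Entry::getValue)
                        .collect(Collectors.toList()))));
    }

    public static void main(String[] args) {
        List<Entry<Entry<Long, String>, String>> records = List.of(
            new SimpleEntry<>(new SimpleEntry<>(1L, "ssplit"), "s1"),
            new SimpleEntry<>(new SimpleEntry<>(0L, "ssplit"), "s0"),
            new SimpleEntry<>(new SimpleEntry<>(0L, "tokenize"), "t0"));
        System.out.println(groupByFunc(records));
    }
}
```

Each map key then corresponds to one output folder, and each value list is the folder's contents in original sentence order.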