ahalterman opened 6 years ago
@JingL1014, didn't we already do this with Sayeed? Did that code get pushed back here yet?
Following up on this. Are any of these (1, 2, 3, 4) complete? This is necessary for us to produce Arabic event data.
We have updated UniversalPetrarch to consume JSON-formatted data. The UD-Petrarch coder for English is running side by side with the Petrarch2 event coder in the SPEC Pipeline.
We are not using the stanford-pipeline project to generate event codings from raw text. Instead, we run our own distributed framework, so we haven't tested adding UD-Petrarch to that pipeline.
We currently use the ufal.udpipe package for Python to do the parsing (https://pypi.org/project/ufal.udpipe). To use the parser, we need to install the package and download the language-specific model files, which can be automated (e.g., via requirements.txt and some scripting).
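Automating the model download could look like the sketch below. The model filenames and base URL here are placeholders, not the real UDPipe release artifacts — substitute the actual model names and repository URL for whatever UDPipe release the pipeline pins:

```python
import os
import urllib.request

# Hypothetical mapping from language code to UDPipe model filename;
# the real filenames depend on the UDPipe model release being used.
MODELS = {
    "ar": "arabic-ud-2.0-170801.udpipe",
    "es": "spanish-ud-2.0-170801.udpipe",
}

# Placeholder base URL; substitute the actual UDPipe model repository.
BASE_URL = "https://example.org/udpipe-models"

def model_path(lang, cache_dir="models"):
    """Return the local path to a language's model, downloading it if missing."""
    fname = MODELS[lang]
    path = os.path.join(cache_dir, fname)
    if not os.path.exists(path):
        os.makedirs(cache_dir, exist_ok=True)
        urllib.request.urlretrieve(f"{BASE_URL}/{fname}", path)
    return path
```

A deployment script (or a setup step driven by requirements.txt) would call `model_path("ar")` once at startup, then hand the returned path to `ufal.udpipe.Model.load()`.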
Yes, we are already using UD-Petrarch to code the English sentences, and both the input and output are in JSON format. Any incompatibility with the OEDA format can be addressed. We are using MongoDB to store that data.
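For concreteness, a coded-sentence record in that JSON interchange might look like the sketch below; the field names are illustrative assumptions, not the actual UD-Petrarch or OEDA schema:

```python
import json

# Illustrative record shape for one coded sentence; every field name here
# is an assumption, not the actual UD-Petrarch/OEDA schema.
record = {
    "doc_id": "AFP_2018_001",
    "sentence_id": 3,
    "text": "Protesters clashed with police in the capital.",
    "parse": "...CoNLL-U output from UDPipe...",
    "events": [{"source": "OPP", "target": "COP", "cameo": "145"}],
}

# JSON round-trips cleanly, so the same record can move between the coder,
# MongoDB, and any OEDA-compatibility shim without loss.
serialized = json.dumps(record)
restored = json.loads(serialized)
```

Mapping such records to the OEDA format would then be a matter of renaming/restructuring fields in one small conversion function, rather than changing the coder itself.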
Now that UP is a little more stable, we need to start thinking about making it usable in production pipelines. In order for people (specifically the Spanish and Arabic teams) to be able to produce event data, UP needs to fit into our existing pipelines. This requires a few things: