Closed csarron closed 2 years ago
@csarron Hi,
> From the demo usage, I wonder: if a sentence is tokenized by a BERT tokenizer, is it possible to align the tokens with the SDP parser output?
I'm sorry, but currently the pretrained models only support word-level parsing. It would also be hard to train a model that produces parsing results with intra-word edges from scratch, since annotations inside words are lacking in existing datasets.
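Since the parser output is word-level, one common workaround is to map each BERT subword back to its word index and then attach each word's dependency arcs to (for example) its first subword. Here is a minimal sketch assuming standard BERT WordPiece conventions (`##` continuation prefix, `[CLS]`/`[SEP]`/`[PAD]` specials); the function name is hypothetical, and HuggingFace fast tokenizers offer the same mapping via `BatchEncoding.word_ids()`:

```python
def align_subwords_to_words(subword_tokens):
    """Map each BERT WordPiece token to the index of the word it came from.

    Tokens starting with "##" continue the previous word (standard WordPiece
    convention); special tokens like [CLS]/[SEP]/[PAD] are mapped to None.
    """
    word_ids = []
    word_idx = -1
    for tok in subword_tokens:
        if tok in ("[CLS]", "[SEP]", "[PAD]"):
            word_ids.append(None)       # special token: no source word
        elif tok.startswith("##"):
            word_ids.append(word_idx)   # continuation of the current word
        else:
            word_idx += 1               # a new word begins here
            word_ids.append(word_idx)
    return word_ids

tokens = ["[CLS]", "un", "##believ", "##able", "parsing", "[SEP]"]
print(align_subwords_to_words(tokens))  # [None, 0, 0, 0, 1, None]
```

With this mapping, a word-level arc `(head_word, dep_word, label)` from the SDP output can be projected onto subword positions by picking the first subword of each word as its representative.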
This issue is stale because it has been open for 30 days with no activity.
Hi, this is a great library, thanks for the hard work!
From the demo usage, I wonder: if a sentence is tokenized by a BERT tokenizer, is it possible to align the tokens with the SDP parser output?
Any pointers or suggestions are much appreciated!