-
Hi! I was wondering if anyone has used the wrapper to parse Chinese text before?
I have the following code:
from stanford_corenlp_pywrapper import sockwrap
parser_path = "/Users/hbyan2/Downloads/stanford-…
-
Hello, since OntoNotes contains a Chinese dataset in addition to the English one, I tried switching to the Chinese runtime, but it errors with the following message:
development: 0% 0/172 [00:00
-
This issue tracks an alternative way of accessing models through external .jar files
[Resources not found when referencing external .jar files](https://github.com/ikvmnet/ikvm-maven/issues/51)
-
1. In src/edu/stanford/nlp, there are two folders, /coref and /dcoref; what is the difference between them?
2. In src/edu/stanford/nlp/coref, there are src/edu/stanford/nlp/coref/hybrid, src/e…
-
I'm using KBP for relation extraction in Chinese. According to the official introduction, there is currently a model for Chinese.
I added the kbp annotator to `StanfordCoreNLP-chinese.properties`. When…
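For context, the change described above is typically a one-line edit to the annotators list in the properties file. A minimal sketch (the annotator order shown is an assumption; check the kbp dependency list in the CoreNLP annotator documentation for your version):

```properties
# Sketch of StanfordCoreNLP-chinese.properties with kbp appended.
# kbp needs upstream annotators (pos, lemma, ner, parse/depparse, coref)
# to have run first; exact requirements depend on the CoreNLP version.
annotators = tokenize, ssplit, pos, lemma, ner, parse, coref, kbp
```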
-
### Universal Dependencies
http://universaldependencies.org/
### 20,000 sentences from the Tsinghua University semantic dependency corpus, used as the training set
http://www.hankcs.com/nlp/corpus/chinese-treebank.html#h3-6
### Chinese treebanks
http://www.hankcs.com/nlp/cor…
-
I was trying to run the GIS geocoding GUI with my NER tagged LOCATION csv file generated from the main GUI "Geographic maps: From texts to maps via Google Earth Pro and Google Maps". While it worked f…
-
Hi Louis
### Expected Behavior
I want the "Train.js" module to support multiple NLP plugins, e.g. HanLP. @hankcs
### Actual Behavior
The "Train.js" module uses the node-nlp library; it can'…
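One way the requested abstraction could look (a sketch only; `registerPlugin`, `train`, and the plugin shape are hypothetical names, not part of Train.js or node-nlp):

```javascript
// Sketch of a pluggable NLP backend registry for a Train.js-style module.
// Each backend (node-nlp, HanLP, ...) would be wrapped as a plugin object
// exposing a train(corpus) method; names here are hypothetical.
const plugins = new Map();

function registerPlugin(name, plugin) {
  // A plugin must implement train(corpus) -> model.
  if (typeof plugin.train !== "function") {
    throw new TypeError(`plugin "${name}" must implement train()`);
  }
  plugins.set(name, plugin);
}

function train(pluginName, corpus) {
  const plugin = plugins.get(pluginName);
  if (!plugin) throw new Error(`unknown NLP plugin: ${pluginName}`);
  return plugin.train(corpus);
}

// Example: a trivial stand-in backend (a real one would wrap node-nlp or HanLP).
registerPlugin("dummy", {
  train: (corpus) => ({ size: corpus.length }),
});
```

The point of the registry is that Train.js itself never imports a specific NLP library; it only calls whatever plugin the user registered.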
-
Hello, my name is Guoao Wei. I am a Chinese student interested in NLP and I can help with the Chinese language support for this amazing repository.
## About me
I received a bachelor's degree in …
-
I also have a lot of documents written in Mandarin. Can you add this too?
iszhi updated 4 months ago