-
From a directory with the corenlp jars and the chinese models jar:
```
java -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLP -props StanfordCoreNLP-chinese.properties -annotators tokenize,ssplit,po…
```
-
I am working on an NLP project and used the chinese-whispers package, but I encountered this error:
```
File "/home/usr/.local/lib/python3.6/site-packages/chinese_whispers/__init__.py", line 44, in c…
```
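For context on what the package computes: Chinese Whispers is a simple label-propagation clustering algorithm over a weighted graph. The sketch below is a minimal pure-Python reimplementation for illustration, not the package's actual code; the real algorithm visits nodes in random order each pass, while a fixed order is used here so the toy run is reproducible.

```python
from collections import defaultdict

def chinese_whispers(edges, iterations=10):
    """Minimal Chinese Whispers label propagation over a weighted edge list.
    edges: iterable of (node_a, node_b, weight). Returns {node: cluster_label}."""
    neighbors = defaultdict(list)
    for a, b, w in edges:
        neighbors[a].append((b, w))
        neighbors[b].append((a, w))
    labels = {node: node for node in neighbors}  # every node starts in its own cluster
    for _ in range(iterations):
        for node in sorted(neighbors):  # fixed order for reproducibility
            # Adopt the label carrying the highest total edge weight among neighbors.
            scores = defaultdict(float)
            for other, w in neighbors[node]:
                scores[labels[other]] += w
            labels[node] = max(scores, key=scores.get)
    return labels

# Two triangles with no edges between them -> two clusters.
edges = [(1, 2, 1.0), (2, 3, 1.0), (1, 3, 1.0),
         (4, 5, 1.0), (5, 6, 1.0), (4, 6, 1.0)]
labels = chinese_whispers(edges)
print(labels[1] == labels[2] == labels[3])  # True
print(labels[4] == labels[5] == labels[6])  # True
```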
-
# ❓ Questions & Help
## Details
Hi, I'm trying to use 'fmikaelian/flaubert-base-uncased-squad' for question answering. I understand that I should load the model and the tokenizer. I'm not su…
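For the loading itself, recent `transformers` versions expose `AutoTokenizer` / `AutoModelForQuestionAnswering` (or the `question-answering` pipeline), which accept a model name. Separately, it may help to see what a SQuAD-style head does at the very end: it emits a start logit and an end logit per token, and the answer is the best-scoring valid span. The tokens and logits below are made up for illustration:

```python
def best_span(start_logits, end_logits, max_len=15):
    """Pick (start, end) maximizing start_logits[s] + end_logits[e],
    subject to s <= e and a maximum span length."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
# Hypothetical logits: the head scores "Paris" highly as both start and end.
start = [0.1, 0.0, 0.0, 0.2, 0.0, 3.0, 0.0]
end   = [0.0, 0.1, 0.0, 0.3, 0.0, 2.5, 0.1]
s, e = best_span(start, end)
print(tokens[s:e + 1])  # ['Paris']
```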
-
Hi! I have some questions about using a custom dictionary in `stanfordnlp`.
1. Can I use my own dictionary when I tokenize sentences using `Pipeline`?
2. I know `stanfordnlp` provides a Python wrapper …
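As background on dictionary-driven tokenization: `stanfordnlp`'s tokenizer is neural rather than a plain dictionary lookup, so this is not its actual mechanism, but the classic dictionary baseline is forward maximum matching — at each position, take the longest dictionary entry. A toy sketch (the strings and dictionary are invented):

```python
def max_match(text, dictionary):
    """Greedy forward maximum matching: at each position take the longest
    dictionary word; fall back to a single character."""
    words, i = [], 0
    longest = max(map(len, dictionary))
    while i < len(text):
        for j in range(min(len(text), i + longest), i, -1):
            if text[i:j] in dictionary:
                words.append(text[i:j])
                i = j
                break
        else:  # no dictionary entry matched: emit one character
            words.append(text[i])
            i += 1
    return words

print(max_match("thetabledownthere", {"the", "table", "down", "there"}))
# ['the', 'table', 'down', 'there']
```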
-
**Is your feature request related to a problem? Please describe.**
In [this dataset](https://github.com/opendataby/vybary2019/blob/40ac7cdbf5298746ef1243844aec9764e74dfe7e/recon.ipynb) I need to fi…
-
- [ ] detokenization https://github.com/dmlc/gluon-nlp/pull/409/files#r234886365
- [ ] refactor string pre-processing logic in tokenization.py by reusing gluonnlp building blocks
- [ ] memory optim…
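On the first checklist item: detokenization is the rough inverse of tokenization, reattaching punctuation and contraction pieces to neighboring words. A naive sketch (the rules below are illustrative, not gluon-nlp's actual logic):

```python
def detokenize(tokens):
    """Naively join tokens, attaching punctuation and contraction suffixes
    to the preceding word and opening brackets to the following word."""
    no_space_before = {".", ",", "!", "?", ";", ":", "'s", "n't", "'re", "%", ")"}
    no_space_after = {"(", "$"}
    out = []
    for i, tok in enumerate(tokens):
        if i == 0 or tok in no_space_before or tokens[i - 1] in no_space_after:
            out.append(tok)
        else:
            out.append(" " + tok)
    return "".join(out)

print(detokenize(["I", "do", "n't", "like", "it", ",", "sadly", "."]))
# I don't like it, sadly.
```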
-
Dear Team,
Please help me sort out my case of adapting a new language.
My corpus has about 102 thousand sentences, ranging from short expressions to long sentences of up to 173 characters.
T…
-
Hello, I'm a newbie and I'm trying to work with RStudio for my final project. I got an error while doing POS tagging; I want to transform my .CSV data into sentences. The code:
```
#menentuka…
-
I ran the following code:
```
from spacy import displacy
import zh_core_web_sm

nlp = zh_core_web_sm.load()
```
and got this error:
```
error Traceback (most recent call last)
in
…
```
-
```
text = "Dumplings are delicious."
nlp.load("xxx")
doc = nlp(text)
```
-----------------
```
doc.cats = {
    "Shopping": {"score": 0.01},
    "Food": …
```