Hello,
There is a line in wikiTokenize.py: "from nltk.tokenize import StanfordTokenizer".
But when I run the command python wikiTokenize.py filename.txt, it raises the error "Cannot import name 'StanfordTokenizer'".
I installed nltk with pip install nltk. In the nltk installation path, /anaconda3/envs/py3/lib/python3.6/site-packages/nltk/tokenize, there are stanford_segmenter.py and stanford.py, but no StanfordTokenizer.py.
Could you tell me how to fix this problem?
Ah, sorry for asking for help. I have already found the fix:
just change "from nltk.tokenize import StanfordTokenizer"
to
"from nltk.tokenize.stanford import StanfordTokenizer"