When running the bag-of-words creation step in token.py, I get the error below. Why does this happen? None of the fixes I found on Stack Overflow have worked for me.
Traceback (most recent call last):
File "feature_extract.py", line 51, in
tokens = token.get_tokens()
File "/home/xfbai/Entity-Relation-SVM-master/new_token.py", line 65, in get_tokens
X_train_counts = vectorizer.fit_transform(cut_docs)
File "/home/xfbai/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py", line 1031, in fit_transform
self.fixed_vocabulary_)
File "/home/xfbai/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py", line 962, in _count_vocab
raise ValueError("empty vocabulary; perhaps the documents only"
ValueError: empty vocabulary; perhaps the documents only contain stop words
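For reference, here is a minimal toy sketch (my own example, not the project code, assuming the vectorizer is a plain CountVectorizer with default settings) that raises the same error: the default token_pattern keeps only tokens of two or more word characters, so if every document ends up empty after filtering, the vocabulary is empty and fit_transform fails.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy documents: one is whitespace-only, the other contains only
# single-character tokens, which the default token_pattern
# r"(?u)\b\w\w+\b" discards, so no vocabulary can be built.
cut_docs = ["   ", "a b c"]

vectorizer = CountVectorizer()
# Raises: ValueError: empty vocabulary; perhaps the documents only contain stop words
X_train_counts = vectorizer.fit_transform(cut_docs)
```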
Thanks!