Closed: AhmetCakar closed this issue 3 years ago
I'd for sure be willing to talk this over with you. Could you send along the code you're running?
I tried to do it based on your example:

```python
text_corpus = prepare_data(
    data=pd_tweets,
    target_cols="text_process",
    input_language=input_language,
    min_token_freq=0,  # 0 for BERT
    min_token_len=0,  # 0 for BERT
    remove_stopwords=False,  # False for BERT
    verbose=True,
)
```
```python
num_keywords = 5
num_topics = 5

# Remove n-grams (underscore-joined tokens) for the BERT method
corpus_no_ngrams = [
    " ".join([t for t in text.split(" ") if "_" not in t])
    for text in text_corpus
]

bert_kws = extract_kws(
    method="BERT",
    bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
    text_corpus=corpus_no_ngrams,
    input_language=input_language,
    output_language=None,
    num_keywords=num_keywords,
    num_topics=num_topics,
    ignore_words=ignore_words,
    prompt_remove_words=False,
    show_progress_bar=True,
    batch_size=32,
)
```
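As a side note, a toy example of what that list comprehension does: it drops any token containing an underscore (assuming underscores are the joiner used for n-grams, e.g. `new_york`), leaving only unigrams for the BERT model.

```python
# Toy illustration of the n-gram filter above (assumption: n-grams are
# underscore-joined tokens like "new_york")
sample_corpus = ["new_york is big", "machine_learning is fun"]

sample_no_ngrams = [
    " ".join([t for t in text.split(" ") if "_" not in t])
    for text in sample_corpus
]
print(sample_no_ngrams)  # ['is big', 'is fun']
```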
While running the `bert_kws` step, I get the error I mentioned at varying percentages of the progress bar.
Thanks for sending along more information :)
This looks to be an issue with how `kwx` is interacting with `sentence-transformers`. You seem to not have the `xlm-r-bert-base-nli-stsb-mean-tokens` model where it's supposed to be? `sentence-transformers` would have been installed with `kwx` if you used pip. Did you by chance clone the repository, but then not also do the steps to install all the dependencies?
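For what it's worth, the two install paths would look roughly like this (a sketch, assuming a standard Python packaging setup for the repository):

```shell
# Installing from PyPI pulls in sentence-transformers as a dependency
pip install kwx

# If working from a clone of the repository instead, install it in
# editable mode from the repo root so its declared dependencies are
# also installed
pip install -e .
```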
Also, I'm really not sure what you mean by your percents in the original question - 96, 100, and 26. Could you explain what you mean by those a bit better?
Thank you for the help. I solved the issue by deleting the torch files on my system, and now there's no problem. The problem is solved.
I get this error at different percentages while trying to do keyword extraction with BERT. For example, it first gave this error at 96 percent, then at 100 percent, and most recently at 26 percent. Can you help me?