miso-belica / sumy

Module for automatic summarization of text documents and HTML pages.
https://miso-belica.github.io/sumy/
Apache License 2.0

sumbasic: KeyError #176

Open mrx23dot opened 2 years ago

mrx23dot commented 2 years ago

sumbasic failed on text: common.txt

Traceback (most recent call last):
  File "summerisers.py", line 39, in <module>
    summary = " ".join([obj._text for obj in s(parser.document, sentenceCntOut)])
  File "C:\py38_64\lib\site-packages\sumy\summarizers\sum_basic.py", line 27, in __call__
    ratings = self._compute_ratings(sentences)
  File "C:\py38_64\lib\site-packages\sumy\summarizers\sum_basic.py", line 110, in _compute_ratings
    best_sentence_index = self._find_index_of_best_sentence(word_freq, sentences_as_words)
  File "C:\py38_64\lib\site-packages\sumy\summarizers\sum_basic.py", line 92, in _find_index_of_best_sentence
    word_freq_avg = self._compute_average_probability_of_words(word_freq, words)
  File "C:\py38_64\lib\site-packages\sumy\summarizers\sum_basic.py", line 75, in _compute_average_probability_of_words
    word_freq_sum = sum([word_freq_in_doc[w] for w in content_words_in_sentence])
  File "C:\py38_64\lib\site-packages\sumy\summarizers\sum_basic.py", line 75, in <listcomp>
    word_freq_sum = sum([word_freq_in_doc[w] for w in content_words_in_sentence])
KeyError: 'look'

sumy==0.10.0
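
For reference, a minimal script that reaches the same code path. The original common.txt is not attached, so the text below is a placeholder, and the stemmer/stop-words setup is the one from the sumy README (an assumption, since the failing script isn't shown in full):

    from sumy.parsers.plaintext import PlaintextParser
    from sumy.nlp.tokenizers import Tokenizer
    from sumy.nlp.stemmers import Stemmer
    from sumy.utils import get_stop_words
    from sumy.summarizers.sum_basic import SumBasicSummarizer

    # Placeholder text; the original common.txt is not attached to this issue.
    text = "Look at the sea. We look at the waves and own the view."

    parser = PlaintextParser.from_string(text, Tokenizer("english"))
    summarizer = SumBasicSummarizer(Stemmer("english"))
    summarizer.stop_words = get_stop_words("english")

    # With sumy==0.10.0 some inputs raise KeyError inside
    # _compute_average_probability_of_words, as in the traceback above.
    for sentence in summarizer(parser.document, 2):
        print(sentence)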

slvcsl commented 1 year ago

Hi! Any news on this? Thanks a lot for your work!

mrx23dot commented 1 year ago

Maybe this could help: word_freq_in_doc.get(w, 0). I guess it encounters a word that is not in the dict.
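
Applied to the failing line from the traceback (sum_basic.py line 75), that workaround would look roughly like this. It treats unknown words as zero-frequency instead of crashing, though it works around the preprocessing mismatch rather than fixing it:

    # _compute_average_probability_of_words, workaround sketch:
    # unknown words contribute 0 instead of raising KeyError
    word_freq_sum = sum(word_freq_in_doc.get(w, 0) for w in content_words_in_sentence)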

slvcsl commented 1 year ago

My understanding is that it happens because _get_content_words_in_sentence and _get_all_content_words_in_doc use different preprocessing.

I modified _get_all_content_words_in_doc to use the same preprocessing as _get_content_words_in_sentence:

    def _get_all_content_words_in_doc(self, sentences):
        # Same pipeline as _get_content_words_in_sentence:
        # normalize, then filter out stop words, then stem.
        normalized_words = []
        for s in sentences:
            normalized_words += self._normalize_words(s.words)
        normalized_content_words = self._filter_out_stop_words(normalized_words)
        stemmed_normalized_content_words = self._stem_words(normalized_content_words)
        return stemmed_normalized_content_words

It works now, but I haven't had time to double-check that this is the correct solution.
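
If you want to try this change without editing the installed package, one option (just a sketch, not an official patch) is to subclass the summarizer and override the method:

    from sumy.summarizers.sum_basic import SumBasicSummarizer

    class PatchedSumBasicSummarizer(SumBasicSummarizer):
        # Applies the preprocessing fix above without touching site-packages.
        def _get_all_content_words_in_doc(self, sentences):
            # Same pipeline as _get_content_words_in_sentence:
            # normalize, filter out stop words, then stem.
            normalized_words = []
            for s in sentences:
                normalized_words += self._normalize_words(s.words)
            normalized_content_words = self._filter_out_stop_words(normalized_words)
            return self._stem_words(normalized_content_words)

Then PatchedSumBasicSummarizer can be used anywhere SumBasicSummarizer was.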

tezer commented 1 year ago

Same error from the Docker version:

Traceback (most recent call last):
  File "/usr/local/bin/sumy", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/sumy/__main__.py", line 70, in main
    for sentence in summarizer(parser.document, items_count):
  File "/usr/local/lib/python3.10/site-packages/sumy/summarizers/sum_basic.py", line 27, in __call__
    ratings = self._compute_ratings(sentences)
  File "/usr/local/lib/python3.10/site-packages/sumy/summarizers/sum_basic.py", line 110, in _compute_ratings
    best_sentence_index = self._find_index_of_best_sentence(word_freq, sentences_as_words)
  File "/usr/local/lib/python3.10/site-packages/sumy/summarizers/sum_basic.py", line 92, in _find_index_of_best_sentence
    word_freq_avg = self._compute_average_probability_of_words(word_freq, words)
  File "/usr/local/lib/python3.10/site-packages/sumy/summarizers/sum_basic.py", line 75, in _compute_average_probability_of_words
    word_freq_sum = sum([word_freq_in_doc[w] for w in content_words_in_sentence])
  File "/usr/local/lib/python3.10/site-packages/sumy/summarizers/sum_basic.py", line 75, in <listcomp>
    word_freq_sum = sum([word_freq_in_doc[w] for w in content_words_in_sentence])
KeyError: 'own'

nefastosaturo commented 4 months ago

Hello there.

I encountered this error too.

The problems are in two functions in sum_basic.py, _get_content_words_in_sentence and _get_all_content_words_in_doc, and mostly in the latter.

The different steps in those functions create two different sets/lists of words, because the stop-word filter runs before or after normalization and stemming depending on the function. Also, _get_all_words_in_doc calls the stemmer too, which confuses the stop-word filtering.

So I just changed them like this:


    def _get_all_words_in_doc(self, sentences):
        # return self._stem_words([w for s in sentences for w in s.words])
        return [w for s in sentences for w in s.words]

    def _get_content_words_in_sentence(self, sentence): 
        # firstly normalize
        normalized_words = self._normalize_words(sentence.words) 
        # then filter out stop words
        normalized_content_words = self._filter_out_stop_words(normalized_words)
        # then stem
        stemmed_normalized_content_words = self._stem_words(normalized_content_words)
        return stemmed_normalized_content_words

    def _get_all_content_words_in_doc(self, sentences):
        all_words = self._get_all_words_in_doc(sentences)
        normalized_words = self._normalize_words(all_words)
        normalized_content_words = self._filter_out_stop_words(normalized_words)
        stemmed_normalized_content_words = self._stem_words(normalized_content_words)
        return stemmed_normalized_content_words