ktynski / Marketing_Automations_Notebooks_With_GPT

A collection of automations and experiments exploring the applications of generative AI in Marketing, SEO, and Public Relations

How do I fix this error? [nltk_data] Error loading trigram_collocations: Package #2

Closed · hammad786 closed this issue 1 year ago

hammad786 commented 1 year ago

I'm getting this error while running the Long Form Content module:

[nltk_data] Error loading trigram_collocations: Package
[nltk_data]     'trigram_collocations' not found in index
[nltk_data] Error loading quadgram_collocations: Package
[nltk_data]     'quadgram_collocations' not found in index

ktynski commented 1 year ago

This shouldn't impact the script. To fix the error, please see here: https://stackoverflow.com/questions/34230592/nltk-quadgram-collocation-finder
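As the linked Stack Overflow answer explains, `trigram_collocations` and `quadgram_collocations` are not downloadable NLTK data packages, so `nltk.download()` will always fail on those names. Collocations are instead computed from your own tokens with NLTK's collocation finders. A minimal sketch (the sample text is illustrative only):

```python
from nltk.collocations import TrigramCollocationFinder, TrigramAssocMeasures

# Tokens would normally come from a tokenizer; a plain split keeps
# this sketch free of any nltk.download() calls.
tokens = ("fly fishing in colorado is great and "
          "fly fishing in colorado is fun").split()

# Build trigram collocations directly from the token stream --
# no data package needs to be downloaded for this.
finder = TrigramCollocationFinder.from_words(tokens)
top = finder.nbest(TrigramAssocMeasures().pmi, 3)
print(top)  # three (word, word, word) tuples ranked by PMI
```

The same pattern works for quadgrams with `QuadgramCollocationFinder` and `QuadgramAssocMeasures`.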

hammad786 commented 1 year ago

Thanks so much for the feedback. I'm still getting an error at content generation. I tried it locally and in Google Colab as well.

--

KeyError                                  Traceback (most recent call last)
~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
   2896             try:
-> 2897                 return self._engine.get_loc(key)
   2898             except KeyError:

pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'Article Text'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)

in
----> 1 main("Fly Fishing in Colorado")

in main(topic, model, max_tokens_outline, max_tokens_section, max_tokens_improve_section)
    113     query = topic
    114     results = analyze_serps(query)
--> 115     summary = summarize_nlp(results)
    116
    117     semantic_readout = generate_semantic_improvements_guide(topic, summary, model=model, max_tokens=max_tokens_outline)

in summarize_nlp(df)
    171     total_results = len(df)
    172     # Calculate the average length of the article text
--> 173     avg_length = round(df['Article Text'].apply(len).mean(), 2)
    174     # Get the most common words across all search results
    175     all_words = ', '.join(df['Most Common Words'].sum().split(', '))

~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
   2978         if self.columns.nlevels > 1:
   2979             return self._getitem_multilevel(key)
-> 2980         indexer = self.columns.get_loc(key)
   2981         if is_integer(indexer):
   2982             indexer = [indexer]

~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
   2897             return self._engine.get_loc(key)
   2898         except KeyError:
-> 2899             return self._engine.get_loc(self._maybe_cast_indexer(key))
   2900         indexer = self.get_indexer([key], method=method, tolerance=tolerance)
   2901         if indexer.ndim > 1 or indexer.size > 1:

pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'Article Text'

--

![image](https://github.com/ktynski/Marketing_Automations_Notebooks_With_GPT/assets/9767704/c26e931a-0c0a-488f-a481-df82bceec25d)
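The traceback shows `summarize_nlp` indexing `df['Article Text']` on a DataFrame that has no such column, which suggests `analyze_serps()` returned no scraped articles (e.g. every fetch failed). A minimal defensive sketch (function and column names taken from the traceback; the wrapper itself is hypothetical, not part of the notebook):

```python
import pandas as pd

def summarize_nlp_safe(df: pd.DataFrame) -> dict:
    # Hypothetical guard: fail with a readable message instead of a bare
    # KeyError when the scrape produced no 'Article Text' column.
    if "Article Text" not in df.columns:
        raise ValueError(
            f"Expected an 'Article Text' column; got {list(df.columns)}. "
            "analyze_serps() likely returned no scraped articles."
        )
    # Same computation as line 173 of the traceback.
    avg_length = round(df["Article Text"].apply(len).mean(), 2)
    return {"total_results": len(df), "avg_length": avg_length}
```

Checking the DataFrame returned by `analyze_serps()` this way would surface the real cause (failed scraping) instead of the confusing pandas KeyError.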