Closed ShaRefOh closed 2 months ago
Might have something to do with batch size; it was set to 10 for the other models. It worked with 5.
OK, gpt-3.5 still raises the same error
@ShaRefOh I pushed a potential fix to nlp-dev - feel free to try again and let me know if there are more issues. The implication of the fix is that the parser may return a failure for a given input but still continue with the rest of the batch. Next I'll implement monitoring of how often this actually happens #82
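A minimal sketch of this failure-tolerant batching pattern, assuming a hypothetical per-item coroutine `parse_one` (not the actual parser code): `asyncio.gather(..., return_exceptions=True)` turns a per-item failure into a result entry instead of aborting the whole batch.

```python
import asyncio

async def parse_one(prompt: str) -> str:
    # Hypothetical per-item parser; raises ValueError on a bad input,
    # standing in for e.g. a moderation refusal from the LLM backend.
    if "bad" in prompt:
        raise ValueError(f"input rejected: {prompt!r}")
    return prompt.upper()

async def batch_parse(prompts):
    # return_exceptions=True makes each failure surface as an Exception
    # object in the results list rather than cancelling the other tasks.
    results = await asyncio.gather(
        *(parse_one(p) for p in prompts), return_exceptions=True
    )
    failures = sum(isinstance(r, Exception) for r in results)
    return results, failures

results, failures = asyncio.run(batch_parse(["ok", "bad input", "fine"]))
```

Downstream code then checks `isinstance(r, Exception)` per item, which is also where a failure counter for monitoring would hook in.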
@ronentk can you also catch these:
ValueError Traceback (most recent call last)
Cell In[33], line 2
1 # batch process
----> 2 results = multi_chain_parser.batch_process_ref_posts(inputs=inputs,active_list=["keywords", "topics"],batch_size=10)
File ~/sensemakers/nlp/notebooks/../desci_sense/shared_functions/parsers/multi_chain_parser.py:247, in MultiChainParser.batch_process_ref_posts(self, inputs, batch_size, active_list)
243 parallel_chain = self.create_parallel_chain(active_list)
245 logger.debug("Invoking parallel chain...")
--> 247 results = asyncio.run(
248 parallel_chain.abatch(
249 inst_prompts,
250 config=config,
251 )
252 )
253 cb.progress_bar.close()
255 # post processing results
File ~/Library/Python/3.11/lib/python/site-packages/nest_asyncio.py:35, in _patch_asyncio.<locals>.run(main, debug)
33 task = asyncio.ensure_future(main)
34 try:
---> 35 return loop.run_until_complete(task)
36 finally:
37 if not task.done():
...
--> 574 raise ValueError(response.get("error"))
576 for res in response["choices"]:
577 message = _convert_dict_to_message(res["message"])
ValueError: {'message': 'OpenAI: GPT-3.5 Turbo 16k requires moderation. Your input was flagged for "harassment". No credits were charged.', 'code': 403, 'metadata': {'reasons': ['harassment'], 'flagged_input': 'user: \nYou are an expert annotator tasked with ass...mPost\ntitle: Twitter post\nsummary: None\n\n# Output:'}}
And return the error with the full prompt? I changed the prompt, and for some reason I still see the same message with the same metadata.
Anyway, it would be good to catch these cases where LLM moderation refuses to parse a tweet.
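One way to catch these refusals, sketched under the assumption (matching the traceback above) that the moderation failure surfaces as a `ValueError` whose first argument is a dict with `"code": 403`. The names `safe_invoke` and `invoke` are hypothetical wrappers, not actual parser APIs:

```python
def safe_invoke(invoke, prompt):
    # `invoke` is whatever callable actually sends the prompt to the LLM.
    try:
        return invoke(prompt), None
    except ValueError as e:
        err = e.args[0] if e.args else {}
        if isinstance(err, dict) and err.get("code") == 403:
            # Record the full prompt, not just the truncated
            # `flagged_input` snippet, so the flagged text can be found.
            return None, {"error": err, "full_prompt": prompt}
        raise  # not a moderation refusal; re-raise unchanged

def flagged(prompt):
    # Stand-in for an LLM call that always hits moderation.
    raise ValueError({"message": "flagged", "code": 403,
                      "metadata": {"reasons": ["harassment"]}})

result, failure = safe_invoke(flagged, "the full prompt text")
```

Returning the `(result, failure)` pair keeps the batch loop simple: a `None` result with a populated failure record can be logged and skipped.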
Hmm, those should be caught as well; are you using the new version?
BTW, I managed to run GPT-3.5 on the dataset.
@ShaRefOh can we close this task?
Got the following error using gpt-3.5 and anthropic