Open · jakelaws1 opened this issue 11 months ago
Hey Jake! Thanks for bringing this up, I'll look into it. The above error message usually happens when the OpenAI API has a longer outage or when there is an issue with the key. We have also updated the package release, which might help solve the issue. Could you please confirm which version of Monkey-Patch you are running?
Thank you so much! I was on 0.0.9 and just upgraded to 0.0.10, using Python 3.9.7, and I am still facing the same issue.
I tested a separate script with the same key hitting the OpenAI API directly and it was working as well, which made me think the key wasn't the issue. Let me know if there's anything else I can test or information to provide!
```python
import os
import openai

# Assumption: the original script loads the key before this line; shown here via an
# environment variable so the snippet is runnable as-is.
api_key = os.getenv("OPENAI_API_KEY")
openai.api_key = api_key

def generate_workouts():
    prompt = {
        "role": "user",
        "content": "Generate a list of 100 unique exercises with the following attributes in a json response - "
                   "attributes are workout name as string, description as string, equipment needed as a string list, "
                   "and muscle group as a list string",
    }
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[prompt],
        max_tokens=4000,
        temperature=0.7,
    )
    return response.choices
```
Hey @jakelaws1, could you please paste your full code? Below is what I am running successfully. There are a few syntax errors in the code you posted at the top; here is what the full code should look like:
```python
import os
import openai
from dotenv import load_dotenv
from pydantic import Field
from typing import Annotated
from monkey_patch.monkey import Monkey as monkey

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")


@monkey.patch
def score_sentiment(input: str) -> Annotated[int, Field(gt=0, lt=10)]:
    """
    Scores the input between 0-10
    """


@monkey.align
def align_score_sentiment():
    """Register several examples to align your function"""
    assert score_sentiment("I love you") == 10
    assert score_sentiment("I hate you") == 0
    assert score_sentiment("You're okay I guess") == 5


# This is a normal test that can be invoked
def test_score_sentiment():
    """We can test the function as normal using Pytest or Unittest"""
    assert score_sentiment("I like you") == 7


if __name__ == '__main__':
    align_score_sentiment()
    print(score_sentiment("I like you"))  # 7
    print(score_sentiment("Apples might be red"))
```
(Note: I'm using python-dotenv to load the OPENAI_API_KEY from the .env file. This is optional; you can hardcode the OpenAI key if you like.)
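If you do use a .env file, a quick sanity check like the sketch below (assuming the .env file sits next to the script and the entry is named OPENAI_API_KEY) can rule out a misnamed or missing key:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory

# Should print True; False means the key never reached the environment,
# so any downstream OpenAI call would fail.
print(bool(os.getenv("OPENAI_API_KEY")))
```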
I copied and pasted that exact code and ran it, and I am still getting the same error. I also created a separate virtual environment to ensure incorrect packages weren't causing issues. Right after running this script, I ran a separate OpenAI API script directly in that same virtual environment with success (to ensure there were no key issues).
Let me know if there's any additional information I can provide or things to test out.
```toml
python = "^3.11"
monkey-patch-py = "^0.0.10"
openai = "0.28.1"
```
Please see the errors below.
```
Traceback (most recent call last):
  File "/Users/jacoblaws/Development/python/eleva/gpt_experiments/monkey_patch_descriptions.py", line 34, in <module>
    print(score_sentiment("I like you")) # 7
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/monkey.py", line 217, in wrapper
    output = Monkey.language_modeler.generate(args, kwargs, Monkey.function_modeler, function_description)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/language_models/language_modeler.py", line 38, in generate
    choice = self.synthesise_answer(prompt, model, model_type, llm_parameters)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/language_models/language_modeler.py", line 48, in synthesise_answer
    return self.api_models[model_type].generate(model, self.system_message, prompt, **llm_parameters)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/language_models/openai_api.py", line 80, in generate
    raise Exception("OpenAI API failed to generate a response")
Exception: OpenAI API failed to generate a response
```
Okay brilliant, I think I have the information necessary to replicate this. I will try when I'm back at my machine!
Hi Jake, this is quite baffling to us, as we can't reproduce the error at all. Just to double-check: is the OpenAI key in the .env file stored under the "OPENAI_API_KEY" name?
We also pushed up an addition so that you should now get informative OpenAI error messages instead of the generic one, which should help in debugging issues. It's not in the pip release yet (we're including a couple of additional enhancements), but it's in the master branch; if you want to run from a fork of that branch, let me know what the error is! I'll keep you posted as well once we've updated the pip package.
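For illustration only (this is a sketch of the idea, not the actual monkey-patch source): the change amounts to attaching the concrete OpenAI exception to the raised error instead of swallowing it, so problems such as an invalid key, a rate limit, or a timeout show up directly in the traceback.

```python
import openai  # openai < 1.0, matching openai == 0.28.1 above

def chat_completion_with_details(**kwargs):
    """Hypothetical wrapper: surface the underlying OpenAI error instead of a generic message."""
    try:
        return openai.ChatCompletion.create(**kwargs)
    except openai.error.OpenAIError as exc:
        # exc could be AuthenticationError, RateLimitError, Timeout, APIError, etc.
        raise Exception(f"OpenAI API failed to generate a response: {exc}") from exc
```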
I am using the exact code provided in an example, and I have an existing OpenAI API key that I have tested in another script calling OpenAI directly.
Please help me understand why it's raising the generic exception "Exception: OpenAI API failed to generate a response".
```python
@monkey.patch
def score_sentiment(input: str) -> Annotated[int, Field(gt=0, lt=10)]:
    """Scores the input between 0-10"""


@monkey.align
def align_score_sentiment():
    """Register several examples to align your function"""
    assert score_sentiment("I love you") == 10
    assert score_sentiment("I hate you") == 0
    assert score_sentiment("You're okay I guess") == 5


def test_score_sentiment():
    """We can test the function as normal using Pytest or Unittest"""
    score = score_sentiment("I like you")
    assert score >= 7


if __name__ == "__main__":
    align_score_sentiment()
    print(score_sentiment("I like you"))  # 7
    print(score_sentiment("Apples might be red"))
```
```
Traceback (most recent call last):
  File "/Users/jacoblaws/Development/python/eleva/gpt_experiments/monkey_patch_descriptions.py", line 46, in <module>
    print(score_sentiment("I like you")) # 7
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/monkey.py", line 206, in wrapper
    output = Monkey.language_modeler.generate(args, kwargs, Monkey.function_modeler, function_description)
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/language_models/language_modeler.py", line 31, in generate
    choice = self.synthesise_answer(prompt, model, model_type, llm_parameters)
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/language_models/language_modeler.py", line 41, in synthesise_answer
    return self.api_models[model_type].generate(model, self.system_message, prompt, **llm_parameters)
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/language_models/openai_api.py", line 70, in generate
    raise Exception("OpenAI API failed to generate a response")
Exception: OpenAI API failed to generate a response
```