k4l1sh / alexa-gpt

A tutorial on how to use ChatGPT in Alexa
MIT License
204 stars 44 forks

Error import OpenAI #12

Closed pisolofin closed 7 months ago

pisolofin commented 7 months ago

I tried your code but I get an error when invoking the skill. If I remove all calls to OpenAI, the skill is invoked correctly.

When I add from openai import OpenAI to lambda_function.py I get the error.

In the requirements.txt file I added openai==1.3.3, and I also tried version 1.12.0, but nothing changed.

Can someone help me? Thanks.

k4l1sh commented 7 months ago

Looks like it's still working with version 1.3.3. Could you provide more details of the error? You can see the error in the Test tab after simulating a chat. Also, be sure to save requirements.txt and lambda_function.py before deploying, and put them in the same "lambda" folder.

pisolofin commented 7 months ago

The log in the Test tab? I enabled the Device Log option, but that is all I receive (see attached screenshot).

I tried both versions, 1.3.3 and 1.12.0, because I thought the old version might be outdated.

I also tried to download the "OpenAI" code and add it to my skill directly, but I had problems with the "httpx" dependency, so I think the dependencies for my skill aren't being installed (see attached screenshot).

Yes, requirements.txt and lambda_function.py are in the same folder, and I saved them (see attached screenshot).

k4l1sh commented 7 months ago

In the Test tab, I was referring to the JSON output you receive, which shows the detailed errors. But if it is indeed an import error, you should click on CloudWatch Logs in the Code tab to see detailed information.

When sending a new message to Alexa, you can see the logs under the /aws/lambda log group. If you have an import error, you should see something like: [ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'openai' Traceback (most recent call last)

Also, in the /aws/codebuild log group, if it is indeed an import error, you should see a message saying it wasn't able to download or install openai while building the code. For example, when I build I receive the log: Successfully installed annotated-types-0.6.0 anyio-3.7.1 ask-sdk-core-1.11.0 ask-sdk-model-1.82.0 ask-sdk-runtime-1.19.0 boto3-1.9.216 botocore-1.12.253 certifi-2024.2.2 charset-normalizer-3.3.2 distro-1.9.0 docutils-0.15.2 exceptiongroup-1.2.0 h11-0.14.0 httpcore-1.0.2 httpx-0.26.0 idna-3.6 jmespath-0.10.0 openai-1.3.3 pydantic-2.6.1 pydantic-core-2.16.2 python-dateutil-2.8.2 requests-2.31.0 s3transfer-0.2.1 six-1.16.0 sniffio-1.3.0 tqdm-4.66.2 typing-extensions-4.9.0 urllib3-1.25.11

pisolofin commented 7 months ago

In the /aws/lambda logs I have an import error:

[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'openai'
Traceback (most recent call last):

But I don't have an /aws/codebuild log group.

k4l1sh commented 7 months ago

So maybe requirements.txt is not being deployed, or something is blocking the download of the openai module. The CodeBuild logs would need to be checked to debug this more accurately.

I would try creating the skill again, changing the hosting region to another part of the US, and creating the skill directly from this repo as described at https://github.com/k4l1sh/alexa-gpt?tab=readme-ov-file#5. Do you know of anything in your developer account that could be blocking the connection to PyPI to download the openai module?

Also, make sure to click "Save" on requirements.txt and "Deploy" after entering the OpenAI API key.

Another solution would be to remove openai from requirements.txt and modify the code to call the OpenAI API with requests or another module, since the OpenAI API can be called with plain POST requests. In that case the generate_gpt_response() function would need to be changed.

pisolofin commented 7 months ago

I tried all the hosting regions: EU (Ireland), US East (N. Virginia) and US West (Oregon). In each region I couldn't import your repository, so I created the skill from scratch.

I get the same error every time :(

k4l1sh commented 7 months ago

Creating the skill from scratch, I was able to reproduce your error. There is indeed something preventing the openai module from being imported. Let's switch to requests; I tested it and it worked.

Change your requirements.txt to this:

ask-sdk-core==1.11.0
boto3==1.9.216
requests

And in your lambda_function.py, change the first lines from this:

import logging
import ask_sdk_core.utils as ask_utils
from openai import OpenAI
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.dispatch_components import AbstractExceptionHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response

# Set your OpenAI API key
client = OpenAI(
    api_key="YOUR_API_KEY"
)

To this, setting your API key:

import logging
import ask_sdk_core.utils as ask_utils
import requests
import json
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.dispatch_components import AbstractExceptionHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response

# Set your OpenAI API key
api_key="YOUR_API_KEY"

Also, replace the generate_gpt_response() function with this:

def generate_gpt_response(chat_history, new_question):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }
    url = "https://api.openai.com/v1/chat/completions"
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for question, answer in chat_history[-10:]:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": new_question})

    data = {
        "model": "gpt-3.5-turbo-1106",
        "messages": messages,
        "max_tokens": 300,
        "n": 1,
        "temperature": 0.5
    }
    try:
        response = requests.post(url, headers=headers, data=json.dumps(data))
        response_data = response.json()
        return response_data['choices'][0]['message']['content']
    except Exception as e:
        return f"Error generating response: {str(e)}"

With these changes you should be able to make it work. Later I will commit the changes to this repository so that the call to OpenAI is made with requests, and I will also look into the error with importing this skill directly from GitHub.
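As a side note, the bare except in generate_gpt_response() above will report any missing 'choices' key as a cryptic Error generating response: 'choices'. A more defensive way to unpack the API response is sketched below; the function name parse_chat_response and its two-argument signature are illustrative, not from the repo:

```python
def parse_chat_response(status_code, payload):
    """Return the assistant message, or a readable error string.

    An OpenAI error response (e.g. HTTP 429 rate limit) carries an
    'error' object instead of a 'choices' list, so indexing 'choices'
    blindly raises KeyError.
    """
    if status_code != 200:
        err = payload.get("error", {}).get("message", "unknown error")
        return f"OpenAI API returned HTTP {status_code}: {err}"
    choices = payload.get("choices")
    if not choices:
        return "OpenAI API returned no choices"
    return choices[0]["message"]["content"]
```

Inside the try block, the return line could then become return parse_chat_response(response.status_code, response.json()), which surfaces the real HTTP status instead of a KeyError.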

pisolofin commented 7 months ago

Thanks, I'm now able to run your script :)

Another small note: sometimes I get the error Error generating response: 'choices'

Thanks again for the support.

pisolofin commented 7 months ago

I checked the log; the response message is Response [429]. I'm on the free tier, and I think there is a request limit, but I don't know what it is.

k4l1sh commented 7 months ago

Yes, the free tier must be the problem. You have 3 RPM (requests per minute).

https://platform.openai.com/docs/guides/rate-limits/free-tier-rate-limits
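If upgrading the plan is not an option, a 429 can also be absorbed in code by retrying with exponential backoff. A minimal sketch, assuming the requests-based call from the earlier comment (the helper name post_with_retry is illustrative):

```python
import time

def post_with_retry(post_fn, retries=3, backoff=1.0):
    """Call post_fn() and retry on HTTP 429 with exponential backoff.

    post_fn is any zero-argument callable returning an object with a
    .status_code attribute, e.g.
    lambda: requests.post(url, headers=headers, data=json.dumps(data)).
    """
    delay = backoff
    response = post_fn()
    for _ in range(retries):
        if response.status_code != 429:
            break
        time.sleep(delay)
        delay *= 2
        response = post_fn()
    return response
```

Note that Alexa expects the skill to respond within about 8 seconds, so the total retry delay has to stay well under that.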

pisolofin commented 7 months ago

I updated my plan and it works very well now. Thanks for the support and this guide.

k4l1sh commented 7 months ago

Fixed in commit f38b8fb