eli64s / readme-ai

README file generator, powered by large language model APIs 👾

Application unable to accept GPT API key #77

Closed johmicrot closed 10 months ago

johmicrot commented 10 months ago

Perhaps this is related to issue #76, but I am unable to get the application to recognise my paid API key. I've attempted with both Docker and conda. I'm on Ubuntu 22.04.3 with Python 3.9.18.

Using Docker: sudo docker run -it -e OPENAI_API_KEY=Key_went_here -v "$(pwd)":/app zeroxeli/readme-ai:latest readmeai -o readme-ai.md -r Repo_went_here

Using conda: first export OPENAI_API_KEY=YOUR_API_KEY, then conda create -n readmeai python=3.9. I enter the conda environment with source activate readmeai and install the requirements with pip install -r requirements.txt.

Finally, I run the application with readmeai --output readme-ai.md --repository Repo_goes_here.
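Put together, the conda steps were (repository URL is a placeholder):

export OPENAI_API_KEY=YOUR_API_KEY
conda create -n readmeai python=3.9
source activate readmeai
pip install -r requirements.txt
readmeai --output readme-ai.md --repository Repo_goes_here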

Both implementations give me ERROR HTTPStatus Exception: Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions' For more information check: https://httpstatuses.com/404. Opening that link in a browser shows: { "error": { "message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.", "type": "invalid_request_error", "param": null, "code": null } }

I even tried modifying https://github.com/eli64s/readme-ai/blob/4cc83dc49e0f5bf3b304d504394a9adf7c028acb/readmeai/core/model.py#L176C1-L176C75 to include my API key directly as a string, but this did not fix the issue.
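As a sanity check, the key can also be tested directly against the same endpoint with curl (a minimal sketch based on OpenAI's documented request format; a successful response would mean the key itself is accepted and the problem lies in how the request is built or in model availability):

curl https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer $OPENAI_API_KEY" -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "ping"}]}'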

eli64s commented 10 months ago

Hi @johmicrot, thanks for reporting the issue. Have you tried installing the PyPI package readmeai directly? Can you run the steps below and report back when you get a moment?

pip install readmeai --upgrade

export OPENAI_API_KEY="your-api-secret-key"

readmeai -r <repository-url>
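If that still fails, it may also be worth confirming the variable is visible in the same shell session before invoking readmeai, for example:

printenv OPENAI_API_KEY > /dev/null && echo "OPENAI_API_KEY is set" || echo "OPENAI_API_KEY is missing"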

Thank you, Eli

eli64s commented 10 months ago

@johmicrot one other thing to check: if you manually cloned the repo and are using conda, you need to use the command below to run the program:

python3 -m readmeai.cli.commands -r <repository-url>

The readmeai command can only be used if you installed the readmeai PyPI package.

See the project's README - running readme-ai section for more details on how to run the tool.
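For reference, the from-source flow would look roughly like this (a sketch assuming the requirements.txt install described above):

git clone https://github.com/eli64s/readme-ai.git
cd readme-ai
pip install -r requirements.txt
export OPENAI_API_KEY="your-api-secret-key"
python3 -m readmeai.cli.commands -r <repository-url>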

Thank you, Eli

johmicrot commented 10 months ago

Sorry, I meant to mention that I also tried it with pip using the commands you specified (outside of conda). The python3 -m readmeai.cli.commands -r <repository-url> command (inside conda) also does not work and gives the same error.

I should have specified that it does generate the report, but it seems to be equivalent to the offline report.

eli64s commented 10 months ago

@johmicrot, in your Docker command are you setting the API key as OPENAI_API_KEY=$OPENAI_API_KEY?

For example, you can try running the following on one of my example repos.

docker pull zeroxeli/readme-ai:latest 

export OPENAI_API_KEY="your-api-secret-key"

docker run -it -e OPENAI_API_KEY=$OPENAI_API_KEY -v "$(pwd)":/app zeroxeli/readme-ai:latest readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui
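If the key still isn't picked up, you can confirm the variable actually reaches the container by overriding the entrypoint (assuming the image includes standard coreutils):

docker run --rm -e OPENAI_API_KEY=$OPENAI_API_KEY --entrypoint printenv zeroxeli/readme-ai:latest OPENAI_API_KEY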

Also, have you tried creating a new API key on OpenAI's site?

johmicrot commented 10 months ago

@eli64s, I have created a new API key specifically for this demo. I've executed all of the commands you provided, and I actually get a new error!

After pulling the latest image, exporting the key, and running the following Docker command:

sudo docker run -it -e OPENAI_API_KEY=$OPENAI_API_KEY -v "$(pwd)":/app zeroxeli/readme-ai:latest readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui

I get this response:

exec /home/tempuser/.local/bin/readmeai: exec format error

So it appears something in the latest push broke it on my system.

Some possible causes that I'm aware of: a mismatched architecture, a corrupted image, an incorrect shebang in the script (e.g. if /home/tempuser/.local/bin/readmeai is a script, it might have an incorrect shebang line), or a binary built for a different OS.
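A quick way to check the architecture hypothesis is to compare the host architecture against the platform recorded in the image (field names as reported by docker image inspect):

uname -m
docker image inspect zeroxeli/readme-ai:latest --format '{{.Os}}/{{.Architecture}}'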

I pulled the repository and built the image on my system with sudo docker build -t rmai .

Then I ran it with sudo docker run -it -e OPENAI_API_KEY=$OPENAI_API_KEY rmai readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui

I get the same error, but here I'll provide the full output:

INFO     Processing github.com: https://github.com/eli64s/readmeai-ui
INFO     Setting LLM engine to: gpt-4
INFO     Saving output file as: readme-ai.md
DEBUG    Ignoring file: .gitignore
DEBUG    Ignoring file: LICENSE
DEBUG    Ignoring file: README.md
DEBUG    Ignoring file: __init__.py
INFO     Repository tree: 
## 📂 Repository Structure

sh
└── readmeai-ui/
    ├── poetry.lock
    ├── pyproject.toml
    ├── requirements.txt
    ├── scripts/
    │   └── clean.sh
    └── src/
        ├── app.py
        └── utils.py

---

DEBUG    Ignoring path: 
DEBUG    Ignoring file: README.md
DEBUG    Ignoring file: .gitignore
DEBUG    Ignoring file: LICENSE
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: .sample
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring path: 
DEBUG    Ignoring file: __init__.py
INFO     Tokenizing content from source: github.com
INFO     context: black
flake8
isort
readmeai
streamlit
INFO     Dependency file found: requirements.txt
INFO     Dependency file found: pyproject.toml
INFO     Dependency file found: poetry.lock
INFO     Dependencies: ['', 'black', 'python', 'streamlit', 'charset-normalizer', 'jsonschema-specifications', 'platformdirs', 'toolz', 'pathspec', 'pyarrow', 'jsonschema', 'importlib-metadata', 'pygments', 'attrs', 'colorama', 'pytest', 'python-dateutil', 'mccabe', 'smmap', 'typing-extensions', 'zipp', 'packaging', 'clean.sh', 'rich', 'mypy-extensions', 'pytz', 'shell', 'tzlocal', 'protobuf', 'requirements.txt', 'referencing', 'blinker', 'sh', 'markdown-it-py', 'gitpython', 'pyflakes', 'tenacity', 'exceptiongroup', 'mdurl', 'six', 'readmeai', 'markupsafe', 'app.py', 'toml', 'utils.py', 'lock', 'txt', 'urllib3', 'py', 'gitdb', 'numpy', 'iniconfig', 'jinja2', 'watchdog', 'tomli', 'tornado', 'pillow', 'poetry.lock', 'click', 'pycodestyle', 'pydeck', 'cachetools', 'pandas', 'flake8', 'certifi', 'requests', 'altair', 'rpds-py', 'tzdata', 'pyproject.toml', 'pluggy', 'validators', 'text', 'isort', 'idna']
WARNING  Truncating tokens: 49995 > 7999
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
ERROR    HTTPStatus Exception:
Client error '404 Not Found' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/404
INFO     Python setup guide: ['pip install -r requirements.txt', 'python main.py', 'pytest']
INFO     README file generated at: readme-ai.md
INFO     README-AI execution complete.
Ashu0Singh commented 10 months ago

@johmicrot I assume you're using a free OpenAI API key.

The application uses gpt-4-1106-preview, which is not available for free accounts. To resolve this, either upgrade to a paid account or use the -m flag to change the model to gpt-3.5-turbo-1106. However, be aware of #61 if your repository is large, as it might cause errors. Switching to a pay-per-use account is recommended.

sudo docker run -it -e OPENAI_API_KEY=$OPENAI_API_KEY rmai readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui -m gpt-3.5-turbo-1106
johmicrot commented 10 months ago

@Ashu0Singh No, I actually have a paid account.

Interestingly, when I specify sudo docker run -it -e OPENAI_API_KEY=$OPENAI_API_KEY rmai readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui -m gpt-3.5-turbo-1106
I get "429 Too Many Requests".

Then when I specify sudo docker run -it -e OPENAI_API_KEY=$OPENAI_API_KEY rmai readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui -m gpt-4-1106-preview (or even just -m gpt-4, the default), I get the 404 Not Found.

I just realized that the "You didn't provide an API key" error I see when clicking on https://api.openai.com/v1/chat/completions is actually not related to the run I just made; it is simply the response you get from opening the link in a browser. But the two errors above still stand.

Ashu0Singh commented 10 months ago

@johmicrot I was able to replicate the error you're facing, and it seems to be specific to Docker. You can try resolving it without Docker using the following steps:

pip install readmeai
export OPENAI_API_KEY=YOURAPIKEY
readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui -m gpt-3.5-turbo-1106

Note: While this may work for small files, it can encounter issues with large files due to token size limitations.

[Screenshot attached: 2023-11-29, 3:05 AM]

For the rate limit error, refer to the OpenAI documentation on rate limits for more information.


The second error (the 404 Not Found) can be resolved by using the CLI version instead of Docker.

pip install readmeai
export OPENAI_API_KEY=YOURAPIKEY
readmeai -o readme-ai.md -r https://github.com/eli64s/readmeai-ui -m gpt-4-1106-preview

Note: Consider using gpt-4 instead of gpt-4-1106-preview, as the latter incurs higher costs. After just two runs, I incurred $1.60 in usage.

johmicrot commented 10 months ago

While I do have a paid account, I was still somehow under the "free tier". I pay $20 a month for GPT-4, but somehow I just had to put $5 onto my account to get the gpt-3.5-turbo-1106 model to work. The latest update has the requirements.txt versions pinned; perhaps that also fixed a previous error I had. Closing the issue.

@Ashu0Singh thanks for the heads up about the usage costs; even with the gpt-3.5-turbo-1106 model I've quickly used up $0.39.

Ashu0Singh commented 10 months ago

@johmicrot Yeah, that's because ChatGPT and the OpenAI API use two different accounts. Signing up for GPT-4 doesn't mean you have a tier-one API account. You'll need to add billing info separately for both of them.

eli64s commented 10 months ago

Thanks for helping out @Ashu0Singh!