Closed 00xPh4ntom closed 1 year ago
Mine is giving this too
same
It's because you don't have a paid OpenAI account. To access the API you need paid billing set up. At least, that's what I was told when I made a POST request to the endpoint from my free account.
Do tell me if I am mistaken.
It's because you don't have a paid OpenAI account. To access the API you need paid billing set up. At least, that's what I was told when I made a POST request to the endpoint from my free account.
Oh, right. I didn't realise that my free tokens expire after 1 year. Thanks, Sam Altman.
Yeah, it was that. Sorry about that. I had never used the OpenAI API, but it turns out my free tokens had expired a month ago, lol.
I am getting this error even though I am paying for my OpenAI subscription and have a brand-new API key. I am able to use ChatGPT (GPT-4) from their console (chat.openai.com) freely, but all sgpt
commands fail with 429 for me:
vpatov@vpatov-1 ☸︎ default:cloud ~ via v12.22.12 via 🐍 v3.10.12
» sgpt "nginx default config file location" --model="gpt-4"
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/vpatov/.local/lib/python3.10/site-packages/sgpt/app.py:179 in main │
│ │
│ 176 │ │ │ caching=cache, │
│ 177 │ │ ) │
│ 178 │ else: │
│ ❱ 179 │ │ full_completion = DefaultHandler(role_class).handle( │
│ 180 │ │ │ prompt, │
│ 181 │ │ │ model=model.value, │
│ 182 │ │ │ temperature=temperature, │
│ │
│ ╭─────────────────────────────── locals ────────────────────────────────╮ │
│ │ cache = True │ │
│ │ chat = None │ │
│ │ code = False │ │
│ │ create_role = None │ │
│ │ describe_shell = False │ │
│ │ editor = False │ │
│ │ install_integration = None │ │
│ │ list_chats = None │ │
│ │ list_roles = None │ │
│ │ model = <ModelOptions.GPT4: 'gpt-4'> │ │
│ │ prompt = 'nginx default config file location' │ │
│ │ repl = None │ │
│ │ role = None │ │
│ │ role_class = <sgpt.role.SystemRole object at 0x7f2c9ab10ee0> │ │
│ │ shell = False │ │
│ │ show_chat = None │ │
│ │ show_role = None │ │
│ │ stdin_passed = False │ │
│ │ temperature = 0.1 │ │
│ │ top_probability = 1.0 │ │
│ ╰───────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/vpatov/.local/lib/python3.10/site-packages/sgpt/handlers/handler.py:30 in handle │
│ │
│ 27 │ def handle(self, prompt: str, **kwargs: Any) -> str: │
│ 28 │ │ messages = self.make_messages(self.make_prompt(prompt)) │
│ 29 │ │ full_completion = "" │
│ ❱ 30 │ │ for word in self.get_completion(messages=messages, **kwargs): │
│ 31 │ │ │ typer.secho(word, fg=self.color, bold=True, nl=False) │
│ 32 │ │ │ full_completion += word │
│ 33 │ │ typer.echo() │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ full_completion = '' │ │
│ │ kwargs = { │ │
│ │ │ 'model': 'gpt-4', │ │
│ │ │ 'temperature': 0.1, │ │
│ │ │ 'top_probability': 1.0, │ │
│ │ │ 'caching': True │ │
│ │ } │ │
│ │ messages = [ │ │
│ │ │ { │ │
│ │ │ │ 'role': 'user', │ │
│ │ │ │ 'content': '###\nRole name: default\nYou are Command Line App │ │
│ │ ShellGPT, a programming and syst'+351 │ │
│ │ │ } │ │
│ │ ] │ │
│ │ prompt = 'nginx default config file location' │ │
│ │ self = <sgpt.handlers.default_handler.DefaultHandler object at 0x7f2c9ab121d0> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/vpatov/.local/lib/python3.10/site-packages/sgpt/handlers/handler.py:25 in get_completion │
│ │
│ 22 │ │ raise NotImplementedError │
│ 23 │ │
│ 24 │ def get_completion(self, **kwargs: Any) -> Generator[str, None, None]: │
│ ❱ 25 │ │ yield from self.client.get_completion(**kwargs) │
│ 26 │ │
│ 27 │ def handle(self, prompt: str, **kwargs: Any) -> str: │
│ 28 │ │ messages = self.make_messages(self.make_prompt(prompt)) │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ kwargs = { │ │
│ │ │ 'messages': [ │ │
│ │ │ │ { │ │
│ │ │ │ │ 'role': 'user', │ │
│ │ │ │ │ 'content': '###\nRole name: default\nYou are Command Line App ShellGPT, │ │
│ │ a programming and syst'+351 │ │
│ │ │ │ } │ │
│ │ │ ], │ │
│ │ │ 'model': 'gpt-4', │ │
│ │ │ 'temperature': 0.1, │ │
│ │ │ 'top_probability': 1.0, │ │
│ │ │ 'caching': True │ │
│ │ } │ │
│ │ self = <sgpt.handlers.default_handler.DefaultHandler object at 0x7f2c9ab121d0> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/vpatov/.local/lib/python3.10/site-packages/sgpt/client.py:92 in get_completion │
│ │
│ 89 │ │ :param caching: Boolean value to enable/disable caching. │
│ 90 │ │ :return: String generated completion. │
│ 91 │ │ """ │
│ ❱ 92 │ │ yield from self._request( │
│ 93 │ │ │ messages, │
│ 94 │ │ │ model, │
│ 95 │ │ │ temperature, │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ caching = True │ │
│ │ messages = [ │ │
│ │ │ { │ │
│ │ │ │ 'role': 'user', │ │
│ │ │ │ 'content': '###\nRole name: default\nYou are Command Line App │ │
│ │ ShellGPT, a programming and syst'+351 │ │
│ │ │ } │ │
│ │ ] │ │
│ │ model = 'gpt-4' │ │
│ │ self = <sgpt.client.OpenAIClient object at 0x7f2c9ab10c70> │ │
│ │ temperature = 0.1 │ │
│ │ top_probability = 1.0 │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/vpatov/.local/lib/python3.10/site-packages/sgpt/cache.py:39 in wrapper │
│ │
│ 36 │ │ │ │ yield cache_file.read_text() │
│ 37 │ │ │ │ return │
│ 38 │ │ │ result = "" │
│ ❱ 39 │ │ │ for i in func(*args, **kwargs): │
│ 40 │ │ │ │ result += i │
│ 41 │ │ │ │ yield i │
│ 42 │ │ │ cache_file.write_text(result) │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ args = ( │ │
│ │ │ <sgpt.client.OpenAIClient object at 0x7f2c9ab10c70>, │ │
│ │ │ [ │ │
│ │ │ │ { │ │
│ │ │ │ │ 'role': 'user', │ │
│ │ │ │ │ 'content': '###\nRole name: default\nYou are Command Line App │ │
│ │ ShellGPT, a programming and syst'+351 │ │
│ │ │ │ } │ │
│ │ │ ], │ │
│ │ │ 'gpt-4', │ │
│ │ │ 0.1, │ │
│ │ │ 1.0 │ │
│ │ ) │ │
│ │ cache_file = PosixPath('/tmp/cache/31af70fdb59473dba9f9f05b9041091b') │ │
│ │ cache_key = '31af70fdb59473dba9f9f05b9041091b' │ │
│ │ func = <function OpenAIClient._request at 0x7f2c9a985e10> │ │
│ │ kwargs = {} │ │
│ │ result = '' │ │
│ │ self = <sgpt.cache.Cache object at 0x7f2c9b048d60> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/vpatov/.local/lib/python3.10/site-packages/sgpt/client.py:59 in _request │
│ │
│ 56 │ │ │ timeout=REQUEST_TIMEOUT, │
│ 57 │ │ │ stream=True, │
│ 58 │ │ ) │
│ ❱ 59 │ │ response.raise_for_status() │
│ 60 │ │ # TODO: Optimise. │
│ 61 │ │ # https://github.com/openai/openai-python/blob/237448dc072a2c062698da3f9f512fae3 │
│ 62 │ │ for line in response.iter_lines(): │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ data = { │ │
│ │ │ 'messages': [ │ │
│ │ │ │ { │ │
│ │ │ │ │ 'role': 'user', │ │
│ │ │ │ │ 'content': '###\nRole name: default\nYou are Command Line App │ │
│ │ ShellGPT, a programming and syst'+351 │ │
│ │ │ │ } │ │
│ │ │ ], │ │
│ │ │ 'model': 'gpt-4', │ │
│ │ │ 'temperature': 0.1, │ │
│ │ │ 'top_p': 1.0, │ │
│ │ │ 'stream': True │ │
│ │ } │ │
│ │ endpoint = 'https://api.openai.com/v1/chat/completions' │ │
│ │ messages = [ │ │
│ │ │ { │ │
│ │ │ │ 'role': 'user', │ │
│ │ │ │ 'content': '###\nRole name: default\nYou are Command Line App │ │
│ │ ShellGPT, a programming and syst'+351 │ │
│ │ │ } │ │
│ │ ] │ │
│ │ model = 'gpt-4' │ │
│ │ response = <Response [429]> │ │
│ │ self = <sgpt.client.OpenAIClient object at 0x7f2c9ab10c70> │ │
│ │ temperature = 0.1 │ │
│ │ top_probability = 1.0 │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/vpatov/.local/lib/python3.10/site-packages/requests/models.py:1021 in raise_for_status │
│ │
│ 1018 │ │ │ ) │
│ 1019 │ │ │
│ 1020 │ │ if http_error_msg: │
│ ❱ 1021 │ │ │ raise HTTPError(http_error_msg, response=self) │
│ 1022 │ │
│ 1023 │ def close(self): │
│ 1024 │ │ """Releases the connection back to the pool. Once this method has been │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ http_error_msg = '429 Client Error: Too Many Requests for url: │ │
│ │ https://api.openai.com/v1/chat/comp'+7 │ │
│ │ reason = 'Too Many Requests' │ │
│ │ self = <Response [429]> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/chat/completions
I have also tried omitting the model parameter; the output is the same. I know my key works, because if I unset it I get 401 instead of 429.
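For anyone else triaging this: the 401-vs-429 contrast above is the useful signal. Per OpenAI's error-code documentation, 401 means the key itself was rejected, while 429 means the key authenticated but the account is out of quota, has no billing set up, or is genuinely rate-limited. A throwaway helper to make the distinction explicit (the helper name is mine, not part of sgpt):

```python
# Quick triage: translate the HTTP status that sgpt surfaces into the
# likely account-side cause. The 401/429 meanings follow OpenAI's
# error-code docs; likely_cause() is a made-up name for this sketch.
def likely_cause(status: int) -> str:
    causes = {
        401: "no/invalid API key: authentication failed",
        429: "key accepted, but quota exhausted, billing missing, or rate limit hit",
    }
    return causes.get(status, "unexpected status")

print(likely_cause(429))
```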
I am getting this error, even though I am paying for my OpenAI subscription, and have a brand new API Key. […]
Try printing out the response JSON and checking the error message. It might be because you have exceeded the number of API calls available in your subscription.
Try printing out the response JSON and checking the error message.
How would I print the response JSON? The response JSON is handled by the library code, and I don't see a verbose option for the shell. I don't really want to go into the library source and start putting prints there...
It might be because you have exceeded the number of API calls available in your subscription.
Even after this fails, I can still use ChatGPT (GPT-4) through the web UI with the same account, so I don't think I've hit my rate limit.
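sgpt swallows the body when it calls raise_for_status(), but you don't need to edit the library to see it: catch the HTTPError that requests raises and read e.response.json(). The sketch below fakes the 429 locally so it runs offline; the payload shape is OpenAI's standard error envelope, and the insufficient_quota code shown is illustrative of the expired-credit case people in this thread ran into, not guaranteed for your account:

```python
import json
import requests

# Build a fake 429 response shaped like the one sgpt receives, so the
# error-extraction pattern can be shown without hitting the API.
resp = requests.models.Response()
resp.status_code = 429
resp.reason = "Too Many Requests"
resp.url = "https://api.openai.com/v1/chat/completions"
resp._content = json.dumps({
    "error": {
        "message": "You exceeded your current quota, "
                   "please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": None,
        "code": "insufficient_quota",
    }
}).encode()

try:
    resp.raise_for_status()  # same call sgpt makes in client.py
except requests.HTTPError as err:
    details = err.response.json()  # the part sgpt never prints
    print(details["error"]["type"])  # → insufficient_quota
    print(details["error"]["message"])
```

Wrapping the real request the same way (or replaying it with curl) tells you whether your 429 is a quota/billing problem or an actual rate limit.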
I am encountering the same issue
I believe the root cause of this problem lies within the OpenAI API rather than in sgpt. I tried the code below but encountered the same issue (P.S. I have a new token with full credit!):
import openai

openai.api_key = 'sk-****TQVp'

def send_message(message):
    # gpt-3.5-turbo is a chat model, so it goes through the chat
    # completions endpoint, not the legacy Completion endpoint
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[{'role': 'user', 'content': message}],
        max_tokens=50,
        temperature=0.7,
        n=1,
    )
    return response.choices[0].message.content.strip()

print("Chat with ChatGPT. Type 'exit' to end the conversation.")
while True:
    user_input = input("You: ")
    if user_input.lower() == 'exit':
        break
    response = send_message(user_input)
    print("ChatGPT: " + response)
I was getting similar errors. Turns out my free credit had expired. I started a paid plan for the API and it works fine now.
I did learn that access to the API is not included in the ChatGPT Plus subscription. You have to set up paid API access separately. Per their pricing page for the API: "Is the ChatGPT API included in the ChatGPT Plus subscription? No, the ChatGPT API and ChatGPT Plus subscription are billed separately. The API has its own pricing, which can be found at https://openai.com/pricing. The ChatGPT Plus subscription covers usage on chat.openai.com only and costs $20/month."
I was getting similar errors. Turns out my free credit had expired. I started a paid plan for the API and it works fine now. […]
Thanks
I have hit the same issue. My key works in another web app and in other GPT helpers; it only errors in shell-gpt, with a 429 or 401 response.
I tried everything I could think of to fix this issue (checking the monthly rate limit, getting a new API key, reinstalling sgpt), but I could not fix it. This is the log: