ading2210 / vercel-llm-api

A reverse engineered Python API wrapper for the Vercel AI Playground, which provides free access to many large language models without needing an account.
https://pypi.org/project/vercel-llm-api
GNU General Public License v3.0
151 stars 12 forks

Request error #12

Open VanshShah1 opened 1 year ago

VanshShah1 commented 1 year ago

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 141, in stream_request
    chunk = chunks_queue.get(block=True, timeout=0.01)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/vanshshah/Documents/GitHub/neurumtopsecret/experiments/gpt.py", line 2, in <module>
    ans=nerdapi.ask("I'm thinking about building an civilization filled with AI bots interacting with each other and developing")
  File "/Users/vanshshah/Documents/GitHub/neurumtopsecret/experiments/nerdapi.py", line 16, in ask
    for chunk in client.generate("openai:gpt-3.5-turbo", f"Your name is n.e.r.d., an AI language model trained by Neurum. {prompt}", params=params):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 185, in generate
    for chunk in self.stream_request(self.session.post, self.generate_url, headers=headers, json=payload):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 144, in stream_request
    raise error
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 132, in request_thread
    response.raise_for_status()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/curl_cffi/requests/cookies.py", line 51, in raise_for_status
    raise RequestsError(f"HTTP Error {self.status_code}: {self.reason}")
curl_cffi.requests.errors.RequestsError: HTTP Error 500:

GidiGumDrop commented 1 year ago

I have the same issue.

0zl commented 1 year ago

+1 for this issue. It always returns status 500.

GidiGumDrop commented 1 year ago

Testing in Postman also returned HTTP error 500. They might have changed the API?

sachnun commented 1 year ago

This is hardcoded, but you can try it.

import logging

retry, max_retries = 0, 10
while retry < max_retries:
    try:
        for chunk in client.chat("openai:gpt-3.5-turbo", messages, params=params):
            print(chunk, end="", flush=True)
        print()
        break
    except Exception:
        retry += 1
        if retry == max_retries:
            raise
        logging.warning(f"Retrying {retry}/{max_retries}...")
INFO:root:Downloading homepage...
INFO:root:Downloading and parsing scripts...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
Internal Server Error
WARNING:root:Retrying 1/10...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
Internal Server Error
WARNING:root:Retrying 2/10...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
The 2020 World Series was played at the Globe Life Field in Arlington, Texas.
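The hardcoded retry loop above can be generalized into a reusable helper that also backs off between attempts, which is gentler on a rate-limited endpoint. A sketch (the name `retry_with_backoff` and its parameters are my own, not part of vercel_ai):

```python
import logging
import random
import time

def retry_with_backoff(fn, max_retries: int = 10, base_delay: float = 0.5):
    """Call fn(), retrying on any exception with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            logging.warning(f"Retrying {attempt + 1}/{max_retries} after {delay:.2f}s...")
            time.sleep(delay)
```

With the client from above you might wrap the whole generation in a closure, e.g. `retry_with_backoff(lambda: list(client.chat("openai:gpt-3.5-turbo", messages, params=params)))`, at the cost of losing streaming within a failed attempt.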
mak448a commented 11 months ago

@ading2210 You added the help wanted label. Would you like me to make a pull request based on https://github.com/ading2210/vercel-llm-api/issues/12#issuecomment-1671172545?

ading2210 commented 11 months ago

@mak448a Sure, I wouldn't mind a PR for this.

mak448a commented 11 months ago

It's harder than I expected to patch this. It won't stop retrying. I'm going to give up. Sorry

itszerrin commented 11 months ago

this usually works:

from vercel_ai import Client
from curl_cffi.requests.errors import RequestsError

def chat_gen(client: Client, messages: list, model: str = "openai:gpt-3.5-turbo", params: dict | None = None) -> str:
    params = params or {"temperature": 0.8}  # avoid a shared mutable default
    response: str = ""
    try:
        for chunk in client.chat(model, messages, params):
            # make sure we don't process the returned error text
            if chunk != 'Internal Server Error':
                response += chunk
        # append the AI's response to the message list
        messages.append({'role': 'assistant', 'content': response})
        return response
    # error-driven recursive call
    except RequestsError:
        return chat_gen(client, messages, params=params, model=model)

mak448a commented 11 months ago

@Recentaly Can you make a pull request?

itszerrin commented 11 months ago

I cannot get the error fixed from the source. Only in my own scripts.

Ivang71 commented 10 months ago

Vercel usually gives out the 500 error when making too many requests from the same IP; this can be fixed by rotating proxies.
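A minimal sketch of that idea: cycle through a proxy pool and rebuild the client with the next proxy after each failure. The function below takes a caller-supplied client factory, so with this library it might be used as `generate_with_rotation(lambda p: vercel_ai.Client(proxy=p), proxies, do_request)`; whether `Client` accepts a `proxy` argument is an assumption here, so check the project README. The proxy URLs are placeholders:

```python
import itertools

def generate_with_rotation(make_client, proxies, do_request, max_attempts: int = 5):
    """Retry do_request(client), building a fresh client with the next
    proxy in round-robin order after each failure. make_client and
    do_request are caller-supplied callables (hypothetical helper)."""
    pool = itertools.cycle(proxies)
    last_exc = None
    for _ in range(max_attempts):
        client = make_client(next(pool))
        try:
            return do_request(client)
        except Exception as exc:  # e.g. curl_cffi RequestsError on HTTP 500
            last_exc = exc
    raise last_exc
```
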