xtekky / gpt4free


Error while fetching the file: 404 #2231

Open prasantpoudel opened 2 days ago

prasantpoudel commented 2 days ago

When I run the model with a large prompt and content, the response comes back as "Error while fetching the file: 404" followed by the result.

from g4f.client import Client

client = Client()

# website_content and combine_prompt are defined earlier in my script
content = []
content.append({
    "type": "text",
    "text": f" text: <website>{website_content}</website> <instruction>{combine_prompt}</instruction>",
})
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": content}],
    max_tokens=2048,
)
print(response.choices[0].message.content)
kqlio67 commented 2 days ago

Hey there! I see you're having trouble processing large amounts of text. Here are some suggestions that might help:

  1. Try reducing the content size. Some APIs have limits on the amount of text they can process in a single request.
  2. If reducing the size isn't an option, you can split the text into smaller parts and process them separately. Here's a modified version of your code that does this:
from g4f.client import Client
import textwrap

def split_content(text, max_length=3000):
    # Wrap the text into chunks of at most max_length characters,
    # without splitting words or collapsing existing whitespace
    return textwrap.wrap(text, max_length, break_long_words=False, replace_whitespace=False)

def process_content(client, model, website_content, combine_prompt, max_tokens=2048):
    # Build the full prompt, then split it into chunks the provider can handle
    full_content = f"<website>{website_content}</website> <instruction>{combine_prompt}</instruction>"
    chunks = split_content(full_content)

    all_responses = []

    for i, chunk in enumerate(chunks):
        content = [{"type": "text", "text": chunk}]

        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": content}],
                max_tokens=max_tokens,
            )
            all_responses.append(response.choices[0].message.content)
            print(f"Processed chunk {i+1}/{len(chunks)}")
        except Exception as e:
            print(f"Error processing chunk {i+1}: {e}")

    return "\n".join(all_responses)

# Usage
client = Client()
model = "gpt-4o-mini"

website_content = "Your long website content goes here..."
combine_prompt = "Your instruction or prompt goes here..."

result = process_content(client, model, website_content, combine_prompt)
print("Final result:")
print(result)

This script breaks your content into smaller chunks and processes them separately. It should help avoid errors related to content size.
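
One thing to keep in mind: each chunk is sent as an independent request, so the model has no context from the other chunks. If that hurts your results, you could give consecutive chunks a small overlap. A minimal character-based sketch (the overlap value is just a starting point to tune):

def split_with_overlap(text, max_length=3000, overlap=200):
    # Step forward by (max_length - overlap) so consecutive chunks
    # share `overlap` characters of context; overlap must stay
    # smaller than max_length or the window would not advance
    step = max_length - overlap
    return [text[i:i + max_length] for i in range(0, len(text), step)]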

Currently, only a few providers in the gpt4free project support a large context; most limit how much text they can process at once. That may change as providers are added or existing ones are updated to handle larger inputs, but for now the code above offers a workaround within those limits.
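
If you want to experiment with which backend copes best with long inputs, you can also pin a specific provider on the client instead of letting g4f pick one. The provider names available differ between g4f versions, so treat OpenaiChat below as a placeholder and check what your installed g4f.Provider module actually exposes:

from g4f.client import Client
from g4f.Provider import OpenaiChat  # placeholder; list g4f.Provider for your version

# Pin a single provider instead of relying on automatic selection
client = Client(provider=OpenaiChat)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)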

Remember to adjust the max_length in split_content and max_tokens in process_content based on the specific limitations of the model and provider you're using.
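
As a rough starting point for max_length, you can budget characters from a provider's token limit. The 4-characters-per-token ratio below is only a heuristic for English text (providers don't report this), with a safety margin so chunks stay under the limit:

def chars_for_token_budget(token_budget, chars_per_token=4, safety=0.8):
    # ~4 characters per token is a common rule of thumb for English;
    # the safety factor leaves headroom for tokenizer variance
    return int(token_budget * chars_per_token * safety)

# e.g. for a provider that accepts roughly 4096 input tokens:
print(chars_for_token_budget(4096))  # 13107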