prasantpoudel opened 2 days ago
Hey there! I see you're having trouble processing large amounts of text. Here are some suggestions that might help:
```python
from g4f.client import Client
import textwrap

def split_content(text, max_length=3000):
    # Split the text into chunks of at most max_length characters,
    # breaking only at whitespace so words stay intact.
    return textwrap.wrap(text, max_length, break_long_words=False,
                         replace_whitespace=False)

def process_content(client, model, website_content, combine_prompt, max_tokens=2048):
    full_content = f"<website>{website_content}</website> <instruction>{combine_prompt}</instruction>"
    chunks = split_content(full_content)
    all_responses = []
    for i, chunk in enumerate(chunks):
        content = [{"type": "text", "text": chunk}]
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": content}],
                max_tokens=max_tokens,
            )
            all_responses.append(response.choices[0].message.content)
            print(f"Processed chunk {i+1}/{len(chunks)}")
        except Exception as e:
            # Keep going on a failed chunk instead of aborting the whole run.
            print(f"Error processing chunk {i+1}: {e}")
    return "\n".join(all_responses)

# Usage
client = Client()
model = "gpt-4o-mini"
website_content = "Your long website content goes here..."
combine_prompt = "Your instruction or prompt goes here..."
result = process_content(client, model, website_content, combine_prompt)
print("Final result:")
print(result)
```
This script breaks your content into smaller chunks and processes them separately. It should help avoid errors related to content size.
Currently, in the gpt4free project, there are few providers that support a large context. Most providers have limitations on the amount of text they can process at once. However, this situation may change in the future as more providers are added or existing ones are updated to handle larger inputs. In the meantime, the code example I provided offers a workaround to process large amounts of text with the current limitations.
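To make the chunking behavior concrete, here is a small standalone sketch of what `textwrap.wrap` (as used in `split_content` above) does with the same flags: it splits at whitespace and never cuts a word in half, so each chunk stays under the character budget.

```python
import textwrap

# Ten repeated words, split with a small width to show the chunking.
sample = "word " * 10
chunks = textwrap.wrap(sample, width=12, break_long_words=False,
                       replace_whitespace=False)

for i, chunk in enumerate(chunks, start=1):
    # Every chunk fits in the width and contains only whole words.
    print(f"chunk {i}: {chunk!r}")
```

Each resulting chunk here is `'word word'` (two whole words), never a fragment like `'wor'`, which is exactly why the script's chunks remain valid prompt text.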
Remember to adjust `max_length` in `split_content` and `max_tokens` in `process_content` based on the specific limitations of the model and provider you're using.
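If you only know a provider's token limit, one way to pick a `max_length` is the rough rule of thumb that English text averages about four characters per token, with a safety margin left for the instruction part of the prompt. The helper below is hypothetical (not part of g4f) and the heuristic is approximate:

```python
# Hypothetical helper: estimate a character budget for split_content
# from a provider's token limit. Assumes ~4 characters per token for
# English text and reserves 20% as a safety margin for the instruction.
def estimate_max_length(token_limit, chars_per_token=4, safety_margin=0.8):
    return int(token_limit * chars_per_token * safety_margin)

# e.g. for a provider capped at 4096 tokens per request:
print(estimate_max_length(4096))  # 13107
```

The real chars-per-token ratio varies by language and tokenizer, so treat the result as a starting point and lower it if you still see truncation errors.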
When I run the model with a big prompt and content, I get the response `Error while fetching the file: 404` and then the result.
from g4f.client import Client