Closed · JSchmie closed this 3 weeks ago
Please update to the new examples; the ones you are using are legacy. Look at this link: https://github.com/ScrapeGraphAI/Scrapegraph-ai/tree/main/examples
I've looked into the examples, and I noticed that in this example and the other Ollama-related examples, the context window is set using `model_tokens`. However, in the simple web-scraping example, the context window isn't modified at all.
I really like your project, but without being able to increase the context window to make full use of the model, I won’t be able to use this framework effectively. Could you please provide a short code snippet or guidance on changing the context length in the latest version?
Ok, can you specify the context window inside, like this?

```python
graph_config = {
    "llm": {
        "model": "ollama/mistral",
        "temperature": 1,
        "format": "json",
        "model_tokens": 128000,
        "base_url": ollama_base_url,
    }
}
```
Btw, which model of Mistral are you using? These are the available models: https://ollama.com/library/mistral
As you can see from my example, I followed this procedure. I attempted to execute it without embeddings for debugging purposes; however, the identical error persists. I am using Ollama version 0.3.14.
I just use the latest Mistral model, but I also tried llama3.1:8b and 70b, which have a context length of 128k, as well as gemma2:9b.
@JSchmie I fixed this in #773. The `model_tokens` dictionary key was only available for model instances before this, but now it's accessible for all models.

The pull request will be merged into the development branch (`pre/beta`) first, so a few days will pass before the fix is available in a stable release.
I have installed your branch using:

```bash
pip install --force-reinstall git+https://github.com/ScrapeGraphAI/Scrapegraph-ai.git@768-fix-model-tokens
```

But unfortunately, I cannot confirm that it works. I still get the error:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (11148 > 1024). Running this sequence through the model will result in indexing errors
```
I can confirm that `self.model_token` is set to 128000. Furthermore, I also tried to set `llm_params["num_ctx"] = self.model_token` here, since `ChatOllama` also uses `num_ctx` to set the context window (see the documentation here), but it still does not work.
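For clarity, the idea behind that attempted change looks roughly like this (an illustrative sketch, not the actual ScrapeGraphAI code; only `ChatOllama` and its `num_ctx` parameter come from LangChain, the helper and its names are my own):

```python
from langchain_ollama import ChatOllama


def build_ollama_llm(llm_config: dict) -> ChatOllama:
    """Forward a generic `model_tokens` setting to Ollama's `num_ctx`,
    which is the parameter that actually controls the context window."""
    params = dict(llm_config)
    model_tokens = params.pop("model_tokens", None)
    if model_tokens is not None:
        # ChatOllama reads the context length from `num_ctx`
        params.setdefault("num_ctx", model_tokens)
    # Strip the provider prefix, e.g. "ollama/llama3.1:8b" -> "llama3.1:8b"
    if params.get("model", "").startswith("ollama/"):
        params["model"] = params["model"].split("/", 1)[1]
    return ChatOllama(**params)


llm = build_ollama_llm({
    "model": "ollama/llama3.1:8b",
    "temperature": 1,
    "format": "json",
    "model_tokens": 128000,
    "base_url": "http://localhost:11434",
})
```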
Hi, please update to the new beta.
:tada: This issue has been resolved in version 1.27.0-beta.13 :tada:
The release is available on:
v1.27.0-beta.13
Your semantic-release bot :package::rocket:
@VinciGit00 I tried that adjustment, and while the error persists:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (11148 > 1024). Running this sequence through the model will result in indexing errors
```

the results look significantly better now! Could it be that this error is being thrown unintentionally?
@JSchmie the error is coming from LangChain and not from ScrapeGraphAI. Using `ollama/mistral` will call Mistral 7B, which has a context window of 1024 tokens.
Yes, but the error still occurs when I am using:

```python
graph_config = {
    "llm": {
        "model": "ollama/llama3.1:8b",
        "temperature": 1,
        "format": "json",
        "model_tokens": 128000,
        "base_url": ollama_base_url,
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": ollama_base_url,
    },
}
```
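For reference, this config is wired into a run roughly like this (a minimal sketch following the `SmartScraperGraph` usage from the repo's examples; the prompt and URL below are placeholders, not my actual ones):

```python
from scrapegraphai.graphs import SmartScraperGraph

smart_scraper_graph = SmartScraperGraph(
    prompt="Describe what this website is about",  # placeholder prompt
    source="https://github.com/ScrapeGraphAI/Scrapegraph-ai",  # placeholder URL
    config=graph_config,  # the config shown above
)

result = smart_scraper_graph.run()
print(result)
```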
**Update:**

I believe I found a crucial issue, which may stem from Ollama itself. In their API documentation, they note:

> Important: when `format` is set to `json`, the output will always be a well-formed JSON object. It's essential to also instruct the model to respond in JSON.
Until now, I wasn't aware of this limitation. If the model doesn’t respond in JSON, it outputs a series of newline characters. Given that inputs can sometimes be quite large, the model might ignore the instruction to respond in JSON, potentially leading to significant quality discrepancies.
Interestingly, when using LangChain directly, this issue doesn’t occur, and the context length is applied correctly. I’ve included the code below, which may be helpful for debugging.
```python
import requests
from bs4 import BeautifulSoup
from langchain_ollama import ChatOllama

# Define the URL to fetch content from
url = "https://github.com/ScrapeGraphAI/Scrapegraph-ai"

# Send a GET request to fetch the raw HTML content from the URL
response = requests.get(url)
response.raise_for_status()  # Raise an exception if an HTTP error occurs

# Parse the HTML content with BeautifulSoup
soup = BeautifulSoup(response.content, "html.parser")

# Extract and clean up text content from HTML, removing tags and adding line breaks
text_content = soup.get_text(separator="\n", strip=True)

# Create a prompt asking the language model (LLM) what the website is about;
# JSON format is explicitly requested in the prompt
prompt = f"""
USE JSON!!!
What is this website about?
{text_content}
"""

# Initialize the language model with specific configurations
llm = ChatOllama(
    base_url="http://localhost:11434",  # Specify the base URL for the LLM server
    model="llama3.1:8b",                # Define the model to use
    num_ctx=128000,                     # Set the maximum context length for the LLM
    format="json",                      # Request JSON output format from the LLM
)

# Invoke the LLM with the prompt and print its response
print(llm.invoke(prompt))
```
The output looks like this:
```
AIMessage(content='{ "type": "json", "result": { "website": "scrapegraphai.com", "library_name": "ScrapeGraphAI", "description": "A Python library for scraping leveraging large language models.", "license": "MIT license" } }\n\n \n\n\n\n\n\n \n\n\n\n ', additional_kwargs={}, response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-10-29T13:35:28.025049704Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 5262412590, 'load_duration': 4070193548, 'prompt_eval_count': 2328, 'prompt_eval_duration': 385116000, 'eval_count': 61, 'eval_duration': 761894000}, id='run-34027abd-c2ea-433e-8eb5-3bb57b5e97a2-0', usage_metadata={'input_tokens': 2328, 'output_tokens': 61, 'total_tokens': 2389})
```
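Incidentally, the `content` field parses as valid JSON despite the trailing newlines. A quick check (assuming the result of `llm.invoke(prompt)` is kept in a variable, say `reply`, instead of being printed directly):

```python
import json

reply = llm.invoke(prompt)           # same call as above, just stored in a variable
parsed = json.loads(reply.content)   # trailing whitespace/newlines are tolerated by json.loads
print(parsed["result"]["library_name"])  # -> "ScrapeGraphAI"
```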
:tada: This issue has been resolved in version 1.28.0-beta.1 :tada:
The release is available on:
v1.28.0-beta.1
Your semantic-release bot :package::rocket:
:tada: This issue has been resolved in version 1.28.0 :tada:
The release is available on:
v1.28.0
Your semantic-release bot :package::rocket:
**Describe the bug**

The `model_tokens` parameter in the `graph_config` dictionary is not being applied to the Ollama model within the `SmartScraperGraph` setup. Despite setting `model_tokens` to 128000, the output still shows an error indicating that the token sequence length exceeds the model's limit (`2231 > 1024`), causing indexing errors.

**To Reproduce**

Steps to reproduce the behavior:

1. Set up a `SmartScraperGraph` using the code below.
2. Use the `graph_config` dictionary, specifying `model_tokens: 128000` under the `"llm"` section.
3. Run `smart_scraper_graph.run()`.

**Expected behavior**

The `model_tokens` parameter should be applied to Ollama's model, ensuring that the model respects the specified 128000-token length without raising indexing errors.

**Code**
**Error Message**

**Desktop:**
**Additional context:** Ollama typically uses the `num_ctx` parameter to set the context length. It seems that `model_tokens` does not directly influence the model's context length, suggesting a possible oversight or misconfiguration in how `SmartScraperGraph` handles token-length parameters with Ollama models.

Thank you for taking the time to look into this issue! I appreciate any guidance or suggestions you can provide to help resolve this problem. Your assistance means a lot, and I'm looking forward to any insights you might have on how to apply the `model_tokens` parameter correctly with Ollama. Thanks again for your help!