Closed: MSR2201 closed this 5 days ago
ok can you show the code please?
```python
from scrapegraphai.graphs import SmartScraperGraph
from scrapegraphai.utils import prettify_exec_info

graph_config = {
    "llm": {
        "model": "ollama/gemma2:2b",
        "temperature": 1,
        "format": "json",  # Ollama needs the format to be specified explicitly
        "model_tokens": 100,  # set the context length depending on the model
        "base_url": "http://localhost:11434",  # Ollama URL of the local host (you can change it if you have a different endpoint)
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "temperature": 0,
        "base_url": "http://localhost:11434",  # Ollama URL
    },
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the projects with their description.",
    source="https://perinim.github.io/projects",
    config=graph_config,
)

result = smart_scraper_graph.run()
print(result)
```
This is the code that I used.
Look at the new examples
Getting the same error...
```
Traceback (most recent call last):
  File "D:\Five minutes\firecrawl\scrapegraph\app.py", line 4, in <module>
```
ok, what is your config? can you try to use llama3?
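For reference, switching the config above to llama3 would look something like the sketch below. The model name, context length, and endpoint are assumptions based on the original config, not a verified fix; adjust `model_tokens` to the actual context length of the model you pull.

```python
# Sketch: same graph_config as above, swapped to llama3 (hypothetical values).
graph_config = {
    "llm": {
        "model": "ollama/llama3",
        "temperature": 0,
        "format": "json",  # Ollama needs the format to be specified explicitly
        "model_tokens": 8192,  # assumed context length for llama3; check your model
        "base_url": "http://localhost:11434",  # default local Ollama endpoint
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "temperature": 0,
        "base_url": "http://localhost:11434",
    },
}
```

The only changes from the gemma2 config are the `model` name and a larger `model_tokens` value; the embeddings block stays the same.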
My laptop is CPU-based, but that should not be a problem with gemma. llama is taking up too much space.
ok please update
```
Traceback (most recent call last):
  File "D:\Five minutes\firecrawl\scrapegraph\app1.py", line 1, in <module>
    from scrapegraphai.graphs import SmartScraperGraph
  File "D:\Five minutes\firecrawl\scrapegraph\myenv\lib\site-packages\scrapegraphai\graphs\__init__.py", line 5, in <module>
    from .abstract_graph import AbstractGraph
  File "D:\Five minutes\firecrawl\scrapegraph\myenv\lib\site-packages\scrapegraphai\graphs\abstract_graph.py", line 16, in <module>
```