mishushakov / llm-scraper

Turn any webpage into structured data using LLMs
MIT License

Hitting input token limit on local language models #22

Open Ademsk1 opened 2 months ago

Ademsk1 commented 2 months ago

When scraping fairly large websites, we hit the token limit and receive the GGML_ASSERT error:

 n_tokens_all <= cparams.n_batch

For smaller websites this isn't an issue.

We should think about decomposing the website into chunks once it exceeds a certain length threshold, summarising each chunk with the local language model, and then stitching the summaries together coherently with one final pass through the model.
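A rough sketch of what that could look like (the `summarize` function here is just a placeholder for whatever completion call we make against the local model, and the character-based chunk size is an arbitrary stand-in for a real token budget):

```ts
// Sketch of a chunk -> summarise -> stitch pipeline (map-reduce style).
// `summarize` is a placeholder for a call into the local model; the
// chunk size below is an arbitrary character budget, not a token count.

type Summarize = (text: string, instruction: string) => Promise<string>;

function chunk(text: string, maxChars = 8_000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

async function summarizeLongPage(content: string, summarize: Summarize): Promise<string> {
  if (content.length <= 8_000) return content; // small pages pass through untouched

  // Map: summarise each chunk independently.
  const partials = await Promise.all(
    chunk(content).map((c) =>
      summarize(c, 'Summarise the relevant content of this page fragment.')
    )
  );

  // Reduce: stitch the partial summaries together with one more pass.
  return summarize(
    partials.join('\n\n'),
    'Combine these fragment summaries into one coherent summary.'
  );
}
```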

Another thought I've had is to take screenshots with Playwright instead and run some text recognition on them. Or, perhaps even better, there might be a Playwright method that extracts only the text content and skips the HTML entirely.
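On the text-only idea: Playwright does expose page.innerText(), so something like this sketch would pull just the rendered text (the URL is only an example):

```ts
import { chromium } from 'playwright';

// Sketch: grab only the rendered text of the page body, skipping the markup.
async function extractText(url: string): Promise<string> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const text = await page.innerText('body'); // visible text only, no tags
  await browser.close();
  return text;
}

extractText('https://news.ycombinator.com').then((t) =>
  console.log(t.length, 'characters of plain text')
);
```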

DraconPern commented 1 month ago

The example https://news.ycombinator.com actually runs into this. I get:

 GGML_ASSERT: D:\a\node-llama-cpp\node-llama-cpp\llama\llama.cpp\llama.cpp:11163: n_tokens_all <= cparams.n_batch

Ademsk1 commented 1 month ago

We can try using the Accessibility features in Playwright (https://playwright.dev/docs/accessibility-testing). That would extract all the text and could be a good start for reducing the HTML size. @mishushakov
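The page linked above covers the axe testing integration; the part of Playwright that seems most useful for this is probably the accessibility snapshot (page.accessibility.snapshot()), which returns roles and accessible names instead of raw HTML. A rough sketch of flattening it into text:

```ts
import { chromium } from 'playwright';

// Sketch: flatten Playwright's accessibility snapshot into plain text.
// The snapshot carries roles and accessible names instead of raw markup,
// so it is typically far smaller than the full HTML.
function flatten(node: any): string[] {
  if (!node) return [];
  const own = node.name ? [`${node.role}: ${node.name}`] : [];
  return own.concat((node.children ?? []).flatMap(flatten));
}

async function accessibleText(url: string): Promise<string> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const snapshot = await page.accessibility.snapshot();
  await browser.close();
  return flatten(snapshot).join('\n');
}
```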

siquick commented 1 month ago

Also getting this on GPT-4 Turbo for some web pages. It only seems to hit the context length with mode: "html", but I find that mode: "text" isn't as accurate.
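A possible middle ground (not something llm-scraper does out of the box, just a sketch) is to keep mode: "html" but prune token-heavy elements from the DOM before the scraper runs:

```ts
import type { Page } from 'playwright';

// Sketch: prune token-heavy elements from the live DOM before handing
// the page to the scraper in HTML mode. Keeps the document structure
// but drops scripts, styles, and inline SVGs that dominate token counts.
async function pruneDom(page: Page): Promise<void> {
  await page.evaluate(() => {
    for (const el of document.querySelectorAll('script, style, svg, noscript, iframe')) {
      el.remove();
    }
    // Strip HTML comments as well.
    const walker = document.createTreeWalker(document.documentElement, NodeFilter.SHOW_COMMENT);
    const comments: Comment[] = [];
    while (walker.nextNode()) comments.push(walker.currentNode as Comment);
    comments.forEach((c) => c.remove());
  });
}
```

Calling pruneDom(page) right before the scraper run usually shrinks the HTML considerably while keeping the structure that mode: "text" throws away.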