barsuna opened 1 month ago
How are you using llama3 with GPT Researcher?
There was a post about this here: https://github.com/assafelovic/gpt-researcher/issues/395
I'm using a small API gateway that translates gpt-researcher API calls to llama.cpp's own APIs and does some general cleanup of the API outputs. lm-studio + ollama can be used without code changes (discounting my other issues); the API gateway requires other changes to gpt-researcher, so I would not recommend it.
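For the "without code changes" route, the usual approach is to point the OpenAI-compatible client at the local server. A rough sketch (the exact variable names gpt-researcher reads may differ; the endpoint shown is ollama's default, and the key value is a placeholder, since local servers typically ignore it):

```shell
# Assumption: gpt-researcher's OpenAI client honors these standard env vars.
export OPENAI_API_BASE="http://localhost:11434/v1"  # ollama's OpenAI-compatible endpoint
export OPENAI_API_KEY="sk-local-placeholder"        # placeholder; local servers usually ignore it
```

For lm-studio the base URL would instead be its local server address (by default `http://localhost:1234/v1`).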
When testing gpt-researcher with a local llama3, I found that extract_headers sometimes throws the exception here.
Apparently, what comes after `<h` is not always a number.
Temporarily I've changed it to
but perhaps the maintainer of this code can fix it properly.
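For illustration, a defensive version could look something like this. This is a sketch, not the project's actual code: the function shape and regex are assumptions. The point is to only treat `<h1>`..`<h6>` as headers, so a stray `<h` from tags like `<html>`, `<hr>`, or `<header>` never reaches an `int()` call and raises ValueError:

```python
import re

def extract_headers(html: str) -> list[tuple[int, str]]:
    """Collect (level, text) pairs for HTML headers, skipping non-header tags.

    Hypothetical defensive variant: only <h1>..<h6> match, so "<h"
    followed by anything other than a digit 1-6 is simply ignored
    instead of crashing the parser.
    """
    headers = []
    for m in re.finditer(r"<h([1-6])[^>]*>(.*?)</h\1>", html,
                         re.IGNORECASE | re.DOTALL):
        headers.append((int(m.group(1)), m.group(2).strip()))
    return headers
```

With this, input such as `<html><h2>Intro</h2><hr>` yields just `[(2, "Intro")]` rather than an exception on the `<html>` and `<hr>` tags.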