Yes, definitely. Integrating with more models is always good. We have provided examples of integration with many language models and services, and LiteLLM is definitely a good option. Due to bandwidth constraints it may not be our highest priority, but any contribution/PR on this front would be appreciated, and we are happy to merge it.
Hi, you can actually use Ollama by running the script `run_storm_wiki_ollama.py` found in the same `examples` directory. However, as of the latest release you should first install STORM as a package in your virtual environment with `pip install knowledge_storm`. Then you need to update some import lines in the `run_storm_wiki_ollama.py` script:
Replace:

```python
from lm import OllamaClient
from rm import YouRM, BingSearch
from storm_wiki.engine import STORMWikiRunnerArguments, STORMWikiRunner, STORMWikiLMConfigs
from utils import load_api_key
```

with:

```python
from knowledge_storm.lm import OllamaClient
from knowledge_storm.rm import YouRM, BingSearch
from knowledge_storm import STORMWikiRunnerArguments, STORMWikiRunner, STORMWikiLMConfigs
from knowledge_storm.utils import load_api_key
```
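For reference, here is a minimal sketch of how those imports are typically wired together, following the pattern of the GPT example in the README. The concrete values (model name, URL, port, `max_tokens`, output directory) are illustrative assumptions, and the setter names may differ slightly in your version of the package:

```python
import os

from knowledge_storm import STORMWikiRunnerArguments, STORMWikiRunner, STORMWikiLMConfigs
from knowledge_storm.lm import OllamaClient
from knowledge_storm.rm import YouRM
from knowledge_storm.utils import load_api_key

# Load keys (e.g. YDC_API_KEY for the You.com retriever) from secrets.toml.
load_api_key(toml_file_path="secrets.toml")

# Point every pipeline stage at the local Ollama server (values are illustrative).
ollama_kwargs = {"model": "gemma2:latest", "url": "localhost", "port": 11434}
lm = OllamaClient(max_tokens=500, **ollama_kwargs)

lm_configs = STORMWikiLMConfigs()
lm_configs.set_conv_simulator_lm(lm)
lm_configs.set_question_asker_lm(lm)
lm_configs.set_outline_gen_lm(lm)
lm_configs.set_article_gen_lm(lm)
lm_configs.set_article_polish_lm(lm)

engine_args = STORMWikiRunnerArguments(output_dir="./output")
rm = YouRM(ydc_api_key=os.getenv("YDC_API_KEY"), k=engine_args.search_top_k)
runner = STORMWikiRunner(engine_args, lm_configs, rm)
```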
After that I tried with Google's `gemma2:latest` model served by Ollama on localhost (I run it on an Apple Silicon M3) and it worked fine. The command is pretty much the same as for the GPT script, but you have to specify your Ollama --url, --port and --model, as well as an --output-dir directory:
```bash
python examples/run_storm_wiki_ollama.py \
    --output-dir "./output" \
    --retriever you \
    --do-research \
    --do-generate-outline \
    --do-generate-article \
    --do-polish-article \
    --url localhost \
    --port 11434 \
    --model gemma2:latest
```
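Note that `--retriever you` expects a You.com search API key (`YDC_API_KEY` in `secrets.toml`), as in the GPT example. The four `--do-*` flags correspond to the stage switches the script passes to the runner; a hedged sketch of the equivalent programmatic call, continuing the snippet above and assuming the same runner API as the README example, looks like this:

```python
# The --do-* flags map to the runner's stage switches (names follow the
# public README example; treat them as assumptions for your version).
topic = "YOUR_TOPIC"  # placeholder topic
runner.run(
    topic=topic,
    do_research=True,            # --do-research
    do_generate_outline=True,    # --do-generate-outline
    do_generate_article=True,    # --do-generate-article
    do_polish_article=True,      # --do-polish-article
)
runner.post_run()
runner.summary()
```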
The Ollama example has been updated by #145. Closing this issue as resolved.
Dear all, I would like to know if we could connect STORM not to OpenAI but to an open-source serving layer like Ollama or vLLM, or better, to an LLM proxy such as LiteLLM? Thanks again.