-
ollama/ollama:latest
-
I'd like to ask a fairly basic question; please forgive my ignorance.
Can Co-Storm be instantiated using Ollama + Serper?
Because this is not a bug, I did not apply the template. Please f…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrat…
-
I use Ollama for Flux, and Flux is large, so is there a way to clear the GPU memory it uses, or some other method to free it?
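One documented way to free that memory is to send a request with `keep_alive: 0`, which asks Ollama to unload the model immediately (recent versions also ship an `ollama stop <model>` CLI command). A minimal stdlib sketch, assuming the default port; `flux-model` is a placeholder tag, not a real model name:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address


def unload_payload(model: str) -> dict:
    # A request with keep_alive=0 and no prompt asks the server to evict
    # the model from (GPU) memory immediately instead of keeping it resident.
    return {"model": model, "keep_alive": 0}


def unload_model(model: str) -> None:
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(unload_payload(model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()


# Requires a running Ollama server; "flux-model" is a placeholder tag:
# unload_model("flux-model")
```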
-
Hi,
I wanted to give this a try and installed Ollama locally. I am able to use the Ollama API at http://localhost:11434/api/generate with curl.
I ran `export OLLAMA_API_BASE=http://localhost:…
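For reference, a quick sanity check that the variable points at a working endpoint might look like this (assuming the default port and an already-pulled `llama3` model; substitute whatever model you have):

```shell
# Point OLLAMA_API_BASE at the local server (default port shown):
export OLLAMA_API_BASE=http://localhost:11434

# Verify the endpoint the variable references actually answers:
curl "$OLLAMA_API_BASE/api/generate" -d '{
  "model": "llama3",
  "prompt": "Say hello",
  "stream": false
}'
```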
-
In the Bionic GPT documentation it is mentioned that it works with Ollama and OpenAPI-compatible backends, and it is demonstrated running a local Gemma model. I could not find information on how to…
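It may help to know that Ollama also exposes an OpenAI-compatible `/v1/chat/completions` endpoint, which is typically what such backends expect. A minimal stdlib sketch, assuming the default port; `gemma` stands in for whichever model tag you actually pulled:

```python
import json
import urllib.request


def chat_body(model: str, content: str) -> dict:
    # OpenAI-style request body; Ollama serves this schema under /v1.
    return {"model": model, "messages": [{"role": "user", "content": content}]}


def chat(model: str, content: str) -> str:
    req = urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",
        data=json.dumps(chat_body(model, content)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Requires a running Ollama server with the model pulled:
# print(chat("gemma", "Hello"))
```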
-
-
- The code needs work; it is currently broken. I need a better understanding of how Ollama works.
Goal:
1. Web search with RAG.
2. Index results with collections.
3. Grab content from top results.
4. Su…
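A rough sketch of the pipeline above, assuming a running Ollama server on the default port; `web_search` is a placeholder to be replaced with a real backend (e.g. a Serper API call), and `llama3` is an arbitrary model tag:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def web_search(query: str) -> list:
    # Step 1/2 placeholder: swap in a real search/indexing backend that
    # returns [{"title": ..., "url": ..., "content": ...}, ...].
    raise NotImplementedError


def build_summary_prompt(query: str, pages: list, per_page_chars: int = 1000) -> str:
    # Steps 3/4: keep only the leading slice of each top result so the
    # prompt stays within a small, predictable budget.
    sections = [
        f"## {p['title']} ({p['url']})\n{p['content'][:per_page_chars]}"
        for p in pages
    ]
    return f"Using only the sources below, answer: {query}\n\n" + "\n\n".join(sections)


def summarize(query: str, pages: list, model: str = "llama3") -> str:
    body = json.dumps({
        "model": model,
        "prompt": build_summary_prompt(query, pages),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


# Requires a running Ollama server and a real web_search implementation:
# print(summarize("What is RAG?", web_search("retrieval augmented generation")))
```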
-
I'm not a specialist, but in the README.md you suggest that ollama/mistral is very slow in Docker, so I'd like to use it outside of Docker.
But I see only
`ollama pull mistral`
and not a command…
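For reference, the usual native (non-Docker) workflow looks roughly like this; the install script URL is the one Ollama documents for Linux (macOS and Windows have installers instead):

```shell
# Install the native Ollama binary:
curl -fsSL https://ollama.com/install.sh | sh

# The installer usually starts the server; otherwise run it yourself:
ollama serve &

# Download the model, then chat with it locally:
ollama pull mistral
ollama run mistral "Hello"
```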
-
E.g., with Llama 3.1 8B you can do some basic tool calling if you're lucky. This could land in the examples/ folder with all the others, and could use any framework, e.g. langchain, crew, raw, etc.
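A minimal "raw" sketch of such an example, using Ollama's `/api/chat` tool-calling support and only the Python stdlib; `get_current_weather` is a hypothetical tool for illustration, and the model tag assumes you pulled `llama3.1:8b`:

```python
import json
import urllib.request


def weather_tool() -> dict:
    # JSON-schema tool definition in the shape Ollama's /api/chat expects.
    # get_current_weather is a hypothetical function used for illustration.
    return {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }


def ask_with_tools(prompt: str, model: str = "llama3.1:8b"):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [weather_tool()],
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        msg = json.load(resp)["message"]
        # With an 8B model, tool_calls may or may not appear ("if you're lucky").
        return msg.get("tool_calls") or msg["content"]


# Requires a running Ollama server with the model pulled:
# print(ask_with_tools("What's the weather in Paris?"))
```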