-
I would like to use Ollama as the LLM provider, but hosted with a remote cloud provider. I was unable to find where to put the URL for this. Please provide instructions on what to modify!
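For context, most Ollama clients take a base URL pointing at the server (which listens on port 11434 by default) and append the API path to it. A minimal sketch, assuming a hypothetical remote host `my-cloud-host.example.com` and the standard `/api/generate` endpoint:

```python
import json
import urllib.request

# Hypothetical remote host; Ollama serves its API on port 11434 by default.
OLLAMA_BASE_URL = "http://my-cloud-host.example.com:11434"

def build_generate_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request against Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request(OLLAMA_BASE_URL, "llama3", "Hello!")
print(req.full_url)  # http://my-cloud-host.example.com:11434/api/generate
```

Sending the request is then `urllib.request.urlopen(req)`; where exactly the base URL is configured depends on the tool in question (some read it from an environment variable such as `OLLAMA_HOST`, others from a config file).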
-
Ollama currently doesn't support [OpenAI-compatible function calling](https://github.com/ollama/ollama/issues/2790), but there are models such as [Hermes 2 Pro](https://huggingface.co/NousResearch/Herme…
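Until native support lands, a common workaround is to parse tool calls out of the raw model output. The sketch below assumes the Hermes-2-Pro-style convention of wrapping a JSON call in `<tool_call>` tags (verify against the actual model card and chat template before relying on it):

```python
import json
import re

# Assumed output convention: <tool_call>{"name": ..., "arguments": {...}}</tool_call>
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list[dict]:
    """Pull JSON tool-call payloads out of <tool_call>...</tool_call> spans."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

reply = 'Sure. <tool_call>{"name": "get_weather", "arguments": {"city": "Berlin"}}</tool_call>'
calls = extract_tool_calls(reply)
print(calls[0]["name"])  # get_weather
```

This keeps the OpenAI-style dispatch logic on the client side: map the extracted `name` to a local function and pass `arguments` as keyword arguments.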
-
System minimum configuration:
- RAM: 16 GB
- Hard drive space: at least 20 GB

Ollama download
(You need to download and run the bad llama!!)
```
// choose one model
The Bad llama here
Large…
```
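For reference, the usual download-and-run flow with the Ollama CLI looks like this (the model name is an example; substitute whichever model fits the 16 GB RAM budget above):

```shell
# Install Ollama (Linux one-liner; see https://ollama.com for macOS/Windows installers)
curl -fsSL https://ollama.com/install.sh | sh

# Download one model -- an 8B-class model fits comfortably in 16 GB of RAM
ollama pull llama3

# Run it interactively
ollama run llama3
```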
-
Hi Jeff,
Your page-summarizer extension looks very cool and handy, so I was wondering if you'd consider making the API endpoint configurable.
Background:
[Ollama](https://ollama.com) is a proje…
-
Hey,
the readme mentions streaming without a rate limit using the SentenceBySentence method. Unfortunately, the rate limit apparently still triggers for me.
That said, I am currently only testing the bot with a…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
This is a "living issue". Editing is appreciated.
### Context:
- Most prominent benchmark for embedding models: https://huggingface.co/spaces/mteb/leaderboard
- We can choose to index the pdf dat…
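Whichever embedding model is chosen from the leaderboard, the extracted PDF text typically has to be split into overlapping chunks before being embedded. A generic sketch (chunk size and overlap are illustrative defaults, not values from this issue):

```python
def chunk_text(text: str, size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping fixed-size chunks for embedding.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 1000)
print(len(chunks))  # 3
```

Each chunk would then be sent to the embedding model (e.g. via Ollama's `/api/embeddings` endpoint) and the vectors stored in the index.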
-
Can ollama be integrated?
-
Mixed-model setups like llama3 + llava are capable of impressive things, such as recognizing a screenshot image and reconstructing it in HTML-style code, for example, if required. It would be…
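As a sketch of how such a request could look: Ollama's `/api/generate` endpoint accepts base64-encoded images in an `images` field for multimodal models like llava. The payload builder below is illustrative (the image bytes and prompt are placeholders):

```python
import base64
import json

def build_llava_payload(image_bytes: bytes, prompt: str, model: str = "llava") -> str:
    """Build a JSON payload for Ollama's /api/generate with an attached image."""
    encoded = base64.b64encode(image_bytes).decode()
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "images": [encoded],  # llava expects base64-encoded image data here
        "stream": False,
    })

payload = json.loads(build_llava_payload(b"\x89PNG...", "Reconstruct this screenshot as HTML."))
print(payload["model"])  # llava
```

A follow-up request could then hand llava's description to a text model like llama3 for the actual HTML generation step.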
-
Aider version:
Python version: 3.11.0
Platform: Windows-10-10.0.22621-SP0
Python implementation: CPython
Virtual environment: Yes
OS: Windows 10 (64bit)
Git version: git version 2.42.0.windows.…