-
Can the Ollama URL be configured to point to a remote box?
Or try using an SSH tunnel to make the remote Ollama appear to be local.
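For reference, both approaches work: Ollama listens on port 11434 by default, so a tunnel like `ssh -N -L 11434:localhost:11434 user@remote-box` (hostname is a placeholder) makes the remote instance reachable as `localhost`. A minimal Python sketch that points at either a remote URL or the tunneled port; reading `OLLAMA_HOST` here is this script's own convention, loosely mirroring the CLI's environment variable:
```python
import os
import requests

# Target a remote Ollama directly, or localhost when the port is
# forwarded via: ssh -N -L 11434:localhost:11434 user@remote-box
base_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

# /api/generate with stream=False returns a single JSON object.
resp = requests.post(
    f"{base_url}/api/generate",
    json={"model": "llama2", "prompt": "ping", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```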
-
### System Info
I am experimenting with TRT-LLM and `flan-t5` models. My simple goal is to build engines with different configurations and tensor parallelism, then review performance. I have a DGX syst…
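A sketch of how such a sweep might be scripted; the two-step convert-then-build flow follows the TensorRT-LLM examples, but the script path and flag names vary between releases (flan-t5 lives under the enc_dec example), so treat them as assumptions:
```python
import subprocess

model_dir = "flan-t5-xl"  # placeholder path to a local HF checkpoint

# Sweep tensor-parallel sizes; each build gets its own output dirs.
for tp in (1, 2, 4):
    ckpt_dir = f"ckpt_tp{tp}"
    engine_dir = f"engine_tp{tp}"
    # Step 1: shard the checkpoint (script name/flags are assumptions).
    subprocess.run(
        ["python", "convert_checkpoint.py",
         "--model_dir", model_dir,
         "--output_dir", ckpt_dir,
         "--tp_size", str(tp)],
        check=True,
    )
    # Step 2: compile the sharded checkpoint into engines.
    subprocess.run(
        ["trtllm-build",
         "--checkpoint_dir", ckpt_dir,
         "--output_dir", engine_dir],
        check=True,
    )
```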
-
**Describe the bug**
```
Failed to create agent from provided information:
Agent with name AngryReindeer already exists
```
**Please describe your setup**
- [ ] MemGPT version: 0.2.12
-…
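As a workaround until the root cause is clear, picking a name that isn't already taken avoids the collision. A minimal sketch; `existing` would come from whatever agent listing your MemGPT setup offers (the helper below is hypothetical, not the 0.2.12 API):
```python
import uuid

def unique_agent_name(base: str, existing: set[str]) -> str:
    """Return base unchanged if free, else append a short random suffix."""
    name = base
    while name in existing:
        name = f"{base}-{uuid.uuid4().hex[:6]}"
    return name

# e.g. names gathered from your existing agents (hypothetical data)
existing = {"AngryReindeer"}
print(unique_agent_name("AngryReindeer", existing))  # e.g. AngryReindeer-3f9a1c
```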
-
## Goal
Follow up on #81 and make the web editor usable for managing models in local/GitHub folders
## Development items
- #473
- [ ] Remove ScalablyTyped plugin, as it can't import duckdb-wasm and ge…
-
I'd like to run live llava completely locally on the Jetson, including the web browser.
However, if I turn off Wi-Fi before starting live llava, the video won't play in the browser.
If I turn off Wi-Fi after…
-
## Description
**Objective:** Integrate **[Ollama](https://ollama.ai/)** into RepoGPT to enable local AI processing using models like Llama 2.
---
## Rationale
- **Enhanced Privacy:** Keep…
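To make the integration concrete, a minimal sketch of the kind of call RepoGPT could make against a local Ollama; this assumes a recent Ollama build with the `/api/chat` endpoint, and the model name and prompt are placeholders:
```python
import requests

# Ollama serves an HTTP API on localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama2",  # any model pulled locally via `ollama pull`
        "messages": [
            {"role": "user", "content": "Summarize this repository."}
        ],
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```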
-
**Describe the bug**
Hi all. I'm working on a blog article, following a mix of local documentation and the Intelligent app workshop, but instead of going with Falcon, I've gone with the Mistral 7B model, and at …
-
**Is your feature request related to a problem? Please describe.**
The doc refers to Ollama with the mixtral model.
**Describe the solution you'd like**
Update the doc.
**Describe alternativ…
-
This is a really great local LLM backend that works on a lot of platforms
(including Intel Macs) and is basically a one-click install.
**Main site:** https://ollama.ai/
**API docs:** https://githu…
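For integration purposes, the API is easy to probe; for example, `GET /api/tags` lists locally pulled models (a small sketch, assuming Ollama is running on its default port):
```python
import requests

# /api/tags enumerates the models already pulled to this machine.
resp = requests.get("http://localhost:11434/api/tags", timeout=30)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])
```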
-
### What is the issue?
There are no issues with any model that fits into a single 3090, but it seems to run out of memory when trying to distribute to the second 3090.
```
INFO [wmain] starting c++ runner | ti…