-
### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
```python
from typing import Union
from langchai…
```
-
### Describe the bug
First off, thanks for your support and for the work done with Bazzite.
I've been looking through various posts on the Discourse which seem to indicate that:
- There was a `u…
-
### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GG…
```
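The same request can be reproduced outside the interactive CLI, which makes it easier to capture the server log around the crash. Below is a minimal sketch using the `ollama` Python client; the package, image path, and prompt are illustrative assumptions, not taken from the report.
```python
# Minimal sketch: send the same vision-model request through the Ollama
# Python client ("pip install ollama"). The image path and prompt are
# illustrative assumptions; a running local Ollama server is required.
import ollama

response = ollama.chat(
    model="llama3.2-vision",
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["image.jpg"],  # hypothetical local file
        }
    ],
)
print(response["message"]["content"])
```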
-
### What is the issue?
I had Ollama compiled from source and it worked fine. Recently I rebuilt it to the latest version, and it no longer seems to use my GPU (it uses a lot of CPU processes, and it …
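As a quick check, the server's `/api/ps` endpoint reports how much of each loaded model is resident in VRAM, which distinguishes a CPU-only load from a GPU one. A minimal sketch, assuming the default `http://localhost:11434` address and that a model has already been loaded (for example by a previous `ollama run`):
```python
# Minimal sketch: query a running Ollama server for its loaded models and
# report whether each one is resident in VRAM (size_vram == 0 means CPU only).
# Assumes the default listen address http://localhost:11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    status = json.load(resp)

for model in status.get("models", []):
    size = model.get("size", 0)
    size_vram = model.get("size_vram", 0)
    if size_vram == 0:
        placement = "CPU only"
    elif size_vram >= size:
        placement = "fully in VRAM"
    else:
        placement = "partially in VRAM"
    print(f"{model.get('name')}: {placement} ({size_vram}/{size} bytes)")
```
The CLI command `ollama ps` prints the same information.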
-
TL;DR: Add an [Ollama](https://ollama.com/) component to Aspire, similar to the [OpenAI](https://learn.microsoft.com/en-us/dotnet/aspire/azureai/azureai-openai-component?tabs=dotnet-cli) component.
## C…
-
Hi @greenido, Thanks for sharing your project!
I wanted to reach out and ask what you think about the idea of using a Docker image that contains both Ollama and the model, instead of using Ollama …
-
### 🐛 Describe the bug
This is the code that I am running.
```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": …
```
-
**Is your feature request related to a problem? Please describe.**
When I run Alpaca, the Ollama instance is under-utilizing my resources, resulting in slower outputs.
**Describe the solution yo…
-
### System Info
NVIDIA A30 GPU
nvidia-smi
```
Thu Oct 31 11:43:51 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02 …
```
-
I have an Electron Webpack/TypeScript project created with Electron Forge.
This is the default `tsconfig.json` in this case: [webpack-typescript/tsconfig.json](https://github.com/electron/forge/blob/…