-
How do I use llama3.1 with Ollama? Is it supported?
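For reference, a minimal sketch of calling llama3.1 through Ollama's REST API, assuming a local server on the default port and that the model has already been pulled with `ollama pull llama3.1`:
```python
import requests

# Ollama's documented /api/generate endpoint; stream=False returns a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```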
-
After updating ipex-llm, running llama3.1 through LangChain and Ollama no longer works.
A simple reproducer:
```python
# pip install langchain langchain_community
from langchain_community.llms import Ollama  # original import was truncated here

llm = Ollama(model="llama3.1")  # assumes a local Ollama server with llama3.1 pulled
print(llm.invoke("Hello"))
```
-
Ensure that Janus can be used with an Ollama endpoint
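Ollama also exposes an OpenAI-compatible API under `/v1`, so one plausible integration path is pointing any OpenAI-style client at that endpoint. A minimal sketch (the endpoint URL and model name are assumptions for illustration):
```python
from openai import OpenAI

# Ollama ignores the API key, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
reply = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```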
-
Hi, I hope you can help me. Here is my situation: I'm installing Docker Desktop using WSL on Windows; I pull images and create containers from Dockerfile and docker-compose.yml files, and I'm installing …
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
Following the Quickstart documentation provided [here](https://docs.privategpt.dev/quickstart/gettin…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
When using Ollama, I get an error saying the llama_index.core.llms.function_calling module does not exist.
Code:
from llama_index.llms.o…
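This error is typically a version mismatch: `llama_index.core.llms.function_calling` only exists in newer `llama-index-core` releases, so the core package and the Ollama integration need to be upgraded together. A minimal sketch of the post-split import path (the model name is an assumption):
```python
# pip install -U llama-index-core llama-index-llms-ollama
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.1", request_timeout=120.0)
print(llm.complete("Hello"))
```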
-
Here is the output I get, running with Ollama locally (just the example from the README)
```
Starting orchestrator
Browser started and ready
Executing command play shape of you on youtube
=====…
```
-
I see `ollama` has a `cuda` and `rocm` version. Ollama [appears to support](https://github.com/ollama/ollama/blob/v0.2.5/gpu/gpu.go#L633) a `oneapi` version now. From some tests it appears this would …
-
Hi!
I'm trying to set up llm-ls via the llm.nvim plugin and I'm hitting weird serde (serialization) errors. I set the following config:
```
{
  backend = "openai",
  url = "http://192.168.0.61:8080/api",
  api_t…
```
-
https://x.com/avthars/status/1831133832168247513