-
I have already been to "https://llama.meta.com/llama-downloads" and applied for a unique custom URL, but the third download still returned "403: Forbidden". Why? Has anyone encountered the same problem befo…
-
I can't get CrewAI to use any custom tools whatsoever for some reason: even when it appears to invoke the tool (and doesn't time out), I get an error stating the tool isn't found, or alternativel…
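For reference, here is a minimal sketch of how a custom tool is typically defined and registered, assuming the crewai_tools BaseTool interface (newer releases expose the same class from crewai.tools); the tool name and logic below are hypothetical:

```python
from crewai import Agent
from crewai_tools import BaseTool

# Hypothetical custom tool: the name, description, and logic are placeholders.
class WordCounterTool(BaseTool):
    name: str = "Word Counter"
    description: str = "Counts the number of words in a given piece of text."

    def _run(self, text: str) -> str:
        # The agent refers to the tool by `name` and calls _run with the input.
        return f"The text contains {len(text.split())} words."

# The tool instance must be passed in the agent's `tools` list; a mismatch
# between the name the LLM emits and the registered tool name is one common
# cause of "tool not found" errors.
analyst = Agent(
    role="Analyst",
    goal="Summarize documents and report basic text statistics.",
    backstory="You analyze text passed to you.",
    tools=[WordCounterTool()],
)
```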
-
### Bug Description
The example code demonstrating how to use SchemaLLMPathExtractor produces an AssertionError when run.
Specifically, it fails the check to see if the kg_schema produced in indi…
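For context, the documented example constructs the extractor roughly like this (a sketch assuming the llama_index property-graph API; the entity and relation labels and the validation schema below are illustrative, not the ones from the failing example):

```python
from typing import Literal
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

# Illustrative labels; the real example defines its own schema.
entities = Literal["PERSON", "PLACE", "ORGANIZATION"]
relations = Literal["HAS", "PART_OF", "WORKED_AT"]

# Maps each entity type to the relations it may participate in.
validation_schema = {
    "PERSON": ["HAS", "PART_OF", "WORKED_AT"],
    "PLACE": ["HAS", "PART_OF"],
    "ORGANIZATION": ["HAS", "PART_OF"],
}

kg_extractor = SchemaLLMPathExtractor(
    llm=llm,  # any llama_index LLM instance
    possible_entities=entities,
    possible_relations=relations,
    kg_validation_schema=validation_schema,
    strict=True,  # with strict=True, triples outside the schema are dropped
)
```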
-
First, the same test worked fine on iPhone 15/14, so it is very likely an issue specific to iPhone 13.
Test setup:
Llama ExecuTorch iOS app, built locally
iPhone **13 Pro**, iOS version **17.4**
llama…
-
### Feature Description
Support using the Llama-3.2-11B-Vision-Instruct model with llama_index.multi_modal_llms.huggingface, including setting up a stream_chat instance with it.
### Reason
_…
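For illustration, the requested usage might look roughly like the sketch below. It is hypothetical: it assumes the HuggingFaceMultiModal class from the llama-index-multi-modal-llms-huggingface package would accept this checkpoint and support streaming chat, which is exactly what this request asks for.

```python
from llama_index.core.llms import ChatMessage
from llama_index.multi_modal_llms.huggingface import HuggingFaceMultiModal

# Hypothetical: loading this vision-instruct checkpoint is the requested
# feature, not something that is claimed to work today.
llm = HuggingFaceMultiModal.from_model_name("meta-llama/Llama-3.2-11B-Vision-Instruct")

messages = [ChatMessage(role="user", content="Describe the attached image.")]
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
```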
-
### Proposal
Add Llama 3.1 support. Currently trying to load it fails with:
`ValueError: meta-llama/Meta-Llama-3.1-8B-Instruct not found. Valid official model names (excl aliases): `
### Mot…
-
Thank you so much for your hard work. I feel extremely close to getting this working. The Llama model loads and the PDF processing works great. But when I ask a question, it returns a "localhost:…
-
**Describe the bug**
What the bug is, and how to reproduce it, preferably with screenshots.
I encountered an OOM error when trying to run DPO on MiniCPM-LLaMA-v-2.5 with my own dataset and 4 r…
-
### What is the issue?
The hardware has 11.1 GiB (RAM) + 1.9 GiB (GPU) = 13 GiB, yet it fails to run a 3B model.
Any idea why?
```
Nov 14 17:49:49 fedora ollama[1197]: r14 0x6
Nov 14 17:49:49 fedor…
-
When I run
```
!bash scripts/merge_lora.sh
```
afterwards I get:
```
Loading LLama3.2-Vision from base model...
Loading LoRA weights...
Traceback (most recent call last):
raise ValueError(
ValueError: Targe…
```
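For comparison, merging LoRA weights back into a base model with PEFT typically looks roughly like the sketch below. The paths are placeholders, and this is not necessarily what scripts/merge_lora.sh does internally.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder paths; substitute the actual base model and LoRA adapter directories.
base_model_path = "path/to/base-model"
lora_path = "path/to/lora-checkpoint"
output_dir = "path/to/merged-model"

# Load the base model first (a vision model would need its own model class instead).
base_model = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.bfloat16)

# Attach the LoRA adapter; this step raises a ValueError if the adapter's
# target modules do not exist in the loaded base model.
model = PeftModel.from_pretrained(base_model, lora_path)

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained(output_dir)

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.save_pretrained(output_dir)
```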