-
Using #64 as a reference, I was able to run `Llama3.2-vision`; however, the output seems completely unrelated. I have not modified the prompt in any way, and I am passing a multipage PDF.
Code:
```python…
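# (The snippet above is truncated.) A minimal sketch, assuming the ollama
# Python package, of how llama3.2-vision is typically invoked; the prompt
# and file name are placeholders. Note the model accepts images rather than
# PDFs, so a multipage PDF would need to be rendered to page images first;
# passing the PDF directly is one plausible cause of unrelated output.
import ollama

response = ollama.chat(
    model='llama3.2-vision',
    messages=[{
        'role': 'user',
        'content': 'Describe this page.',  # placeholder prompt
        'images': ['page-1.png'],          # one rendered PDF page per call
    }],
)
print(response.message.content)
```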
-
Thanks for your great work! I would like to ask: is there a way to access it from a terminal, and how can I choose llama3.2:1b instead of llama3.2?
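In case this refers to Ollama, the variant is selected by its tag: in a terminal, `ollama run llama3.2:1b` starts an interactive session with the 1B model. A minimal sketch of the same selection via the ollama Python package (the prompt is a placeholder):

```python
import ollama

# Selecting the 1B variant is just a matter of passing its tag.
response = ollama.chat(
    model='llama3.2:1b',
    messages=[{'role': 'user', 'content': 'Hello!'}],  # placeholder prompt
)
print(response.message.content)
```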
-
There should be a way to upload an image.
-
Use the same prompt used in the other tests, with the following LLM and embedding model (a sketch of such a run follows the list below).
Add the final analysis to the comparison table available in ??
LLM
* https://ollama.com/library/llama3.2:3b-…
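A minimal sketch of how such a comparison run might look with the ollama Python package; the model tags and the prompt are placeholders, since the list above is truncated:

```python
import ollama

LLM_MODEL = 'llama3.2:3b'         # placeholder; use the tag from the list above
EMBED_MODEL = 'nomic-embed-text'  # placeholder embedding model

PROMPT = 'the same prompt used in the other tests'  # placeholder

# Generate the answer to be analysed.
response = ollama.chat(model=LLM_MODEL,
                       messages=[{'role': 'user', 'content': PROMPT}])
print(response.message.content)

# Embed the answer so it can be compared across models in the table.
embedding = ollama.embed(model=EMBED_MODEL, input=response.message.content)
print(len(embedding.embeddings[0]))
```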
-
### What is the issue?
If I run llama3.1 (which works fine):
```
Prompt: What is three plus one?
Calling function: add_two_numbers
Arguments: {'a': 3, 'b': 1}
Function output: 4
```
But if I run …
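For reference, a tool-calling setup that produces a log like the one above can be written with the ollama Python package; a minimal sketch, assuming a recent ollama-python that accepts Python functions as tools (only the function name, prompt, and arguments come from the log, the rest is assumed):

```python
import ollama

def add_two_numbers(a: int, b: int) -> int:
    """Add two integers (the tool named in the log above)."""
    return a + b

response = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'What is three plus one?'}],
    tools=[add_two_numbers],  # the library derives the tool schema from the signature
)

# Print each tool call the model requested, mirroring the log format above.
for call in response.message.tool_calls or []:
    print('Calling function:', call.function.name)
    print('Arguments:', call.function.arguments)
    print('Function output:', add_two_numbers(**call.function.arguments))
```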
-
Is there any expectation of compatibility with the newly released Llama 3.2? As a developer, could I help with the project?
-
**Is your feature request related to a problem? Please describe.**
Llama3.2 was released, and as it has multimodal support, it would be great to have it in LocalAI.
**Describe the solution you'd li…
-
Is there support for llama3.2 in TensorRT-LLM? I tried building an engine but got a RoPE error. Maybe it is related to the context length? Thanks.
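If the error comes from the rotary-embedding configuration rather than the context length, it may help to inspect the rope_scaling block of the checkpoint's config.json; a minimal sketch (the path is a placeholder):

```python
import json

# Placeholder path to a locally downloaded Llama 3.2 checkpoint.
with open('Llama-3.2-3B/config.json') as f:
    config = json.load(f)

# Llama 3.2 uses the "llama3" rope_scaling type (introduced with Llama 3.1);
# engine builders that predate it tend to fail on this block rather than on
# max_position_embeddings alone.
print(config.get('rope_scaling'))
print(config.get('max_position_embeddings'))
```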