-
Functionary currently runs inside the [container](https://github.com/MorpheusAIs/moragents/blob/main/submodules/moragents_dockers/agents/model_config.py), while Llama 3.1 runs on the host via Ollama.
Let's re…
-
This occurs when using two GPUs, but not when I use just one.
I made sure to update to the Docker image used in the Dockerfile.
commit: a702c6dd2944aaf75800b11f4dfeec6fe5a9b068…
-
Is compatibility with the newly released Llama 3.2 expected? As a developer, could I help with the project?
-
**Is your feature request related to a problem? Please describe.**
Llama 3.2 was released, and since it has multimodal support it would be great to have it in LocalAI.
**Describe the solution you'd li…
-
```
C:\Users\razvan\Downloads\mindcraft-main\mindcraft-main>node main.js
file:///C:/Users/razvan/Downloads/mindcraft-main/mindcraft-main/settings.js:8
"profiles": [
^^^^^^^^^^
SyntaxError: U…
```
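The error points at a bare `"profiles": [` at the top level of settings.js; JSON-style key/value pairs are only legal JavaScript inside an object literal. A minimal sketch of a valid shape, assuming settings.js is loaded as an ES module (the profile path here is illustrative):

```js
// settings.js -- hypothetical minimal shape. The key/value pairs must sit
// inside an object literal, and the object must be exported so main.js can
// import it.
const settings = {
  "profiles": [
    "./andy.json", // illustrative profile path; point this at your own profile
  ],
};
export default settings;
```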
-
## Goal
- llama3.1
- with function calling (see the sketch below)
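A minimal sketch of function calling with llama3.1 through a local Ollama server, assuming Ollama 0.3 or later (which added tool support) and the model already pulled; the `get_weather` tool is purely illustrative:

```js
// Run with Node 18+ as an ES module (fetch and top-level await are built in).
const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  body: JSON.stringify({
    model: "llama3.1",
    stream: false,
    messages: [{ role: "user", content: "What is the weather in Paris?" }],
    tools: [{
      type: "function",
      function: {
        name: "get_weather", // hypothetical tool, for illustration only
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    }],
  }),
});
const data = await res.json();
// When the model decides to call a tool, the call shows up here instead of text.
console.log(data.message.tool_calls);
```

The caller is responsible for executing the requested tool and sending its result back in a follow-up message with role `tool` so the model can produce the final answer.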
-
Hi, the app is missing the latest Llama version.
https://ollama.com/library
-
@tstescoTT reports that running continuous batching in a loop leads to hangs during prefill, usually at seqlen 256 or 512.
-
Hi,
How do I set the model to llama3.1:8b for Local RAG?
I can't find a convenient way to do this.
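I don't know this app's exact settings surface, but assuming it talks to a local Ollama server, a common pattern is: pull the model with `ollama pull llama3.1:8b`, then make sure the app references the exact tag Ollama reports. A small sketch to list the installed tags:

```js
// Query the local Ollama server for installed models (Node 18+, built-in fetch).
const tags = await (await fetch("http://localhost:11434/api/tags")).json();
console.log(tags.models.map((m) => m.name)); // should include "llama3.1:8b"
```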
-
https://huggingface.co/blog/llama31#inference-memory-requirements
Please explain the calculation of the inference memory requirements for Llama 3.1 in this post.
The table below shows an excerpt…
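The post's weights-only numbers follow from parameter count times bytes per parameter; KV cache and activations come on top of that. A back-of-the-envelope sketch reproducing the FP16 column:

```js
// Weights-only memory: N parameters at B bytes each (FP16 = 2, FP8 = 1, INT4 = 0.5).
// paramsBillion * 1e9 params * bytesPerParam bytes / 1e9 bytes-per-GB
// simplifies to paramsBillion * bytesPerParam.
function weightsMemoryGB(paramsBillion, bytesPerParam) {
  return paramsBillion * bytesPerParam;
}
console.log(weightsMemoryGB(8, 2));   // Llama 3.1 8B,   FP16 -> ~16 GB
console.log(weightsMemoryGB(70, 2));  // Llama 3.1 70B,  FP16 -> ~140 GB
console.log(weightsMemoryGB(405, 2)); // Llama 3.1 405B, FP16 -> ~810 GB
```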