-
How do I use llama3.1 with Ollama? Is it supported?
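For reference, llama3.1 is available in the Ollama model library; a minimal sketch against the documented `/api/chat` endpoint, assuming a local Ollama server and that the model has already been pulled (e.g. `ollama pull llama3.1`):
```python
import requests

# Non-streaming chat request to a local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```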
-
**Describe the bug**
I am trying to use the Meta Llama 3.1 (and 3.2) models, which require inference profile support.
I am getting this error: `Unsupported model us.meta.llama3-1-70b-instruct-v1:0, please use models API to get…
-
## Goal
- [ ] Support llama3.1 in the main TensorRT-LLM engine formats (see the sketch after this list)
- [ ] Upload the built engines to HF
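A minimal sketch of running a Llama 3.1 checkpoint through TensorRT-LLM, assuming a recent release that ships the high-level `LLM` API; the Hugging Face model id below is an assumption about which checkpoint would be used, and the engine build happens on first load:
```python
from tensorrt_llm import LLM, SamplingParams

# Builds (or loads) a TensorRT-LLM engine from the HF checkpoint.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```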
## User Requests
-
Hoping llama3.1 will be usable on Ollama soon.
-
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Describe the bug
I am using BERTopic with llama3.1 for topic modelling. My texts are long, so I use doc_…
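The truncated parameter is presumably BERTopic's `doc_length`; a minimal sketch of that setup, assuming llama3.1 is served through Ollama's OpenAI-compatible endpoint, with placeholder documents and truncation values:
```python
import openai
from bertopic import BERTopic
from bertopic.representation import OpenAI

# Ollama exposes an OpenAI-compatible API; the api_key value is unused.
client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

representation_model = OpenAI(
    client,
    model="llama3.1",
    chat=True,
    doc_length=100,         # keep at most 100 tokens per document
    tokenizer="whitespace", # how doc_length is counted
)

docs = ["first long document ...", "second long document ..."]  # placeholders
topic_model = BERTopic(representation_model=representation_model)
topics, probs = topic_model.fit_transform(docs)
```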
-
- Ollama: https://github.com/ollama/ollama/blob/main/docs/api.md#chat-request-with-tools
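For reference, a minimal sketch of a chat request with tools against that endpoint, following the linked Ollama docs; the `get_current_weather` function is a made-up illustration, not part of the Ollama API:
```python
import requests

payload = {
    "model": "llama3.1",
    "stream": False,
    "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                },
                "required": ["location"],
            },
        },
    }],
}
resp = requests.post("http://localhost:11434/api/chat", json=payload)
# For tool-capable models the reply may carry message.tool_calls
# instead of plain text content.
print(resp.json()["message"])
```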
### Prompt
```
"please act as a dictionary, including the pronunciation, explanation, two examples of se…
-
https://huggingface.co/blog/llama31#inference-memory-requirements
Could you explain how the inference memory requirements for Llama 3.1 are calculated in this post?
The table below shows an excerpt…
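For reference, the weight-only figures in that table follow from parameter count times bytes per parameter; a minimal sketch with rounded parameter counts (KV cache and activation memory are extra and depend on context length and batch size):
```python
# Approximate parameter counts for the Llama 3.1 family.
PARAMS = {"8B": 8e9, "70B": 70e9, "405B": 405e9}
BYTES_PER_PARAM = {"FP16": 2, "FP8": 1, "INT4": 0.5}

for size, n in PARAMS.items():
    row = ", ".join(
        f"{dtype}: {n * b / 1e9:.0f} GB" for dtype, b in BYTES_PER_PARAM.items()
    )
    print(f"Llama 3.1 {size} -> {row}")
# e.g. 70B -> FP16: 140 GB, FP8: 70 GB, INT4: 35 GB
```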
-
When attempting to download "llama3.1" via the download new model UI, I'm getting:
**It looks like "llama3.1" is not the right name.**
This error does not happen for the other llama models.
-
Here is the output I get when running with Ollama locally (just the example from the README):
```
Starting orchestrator
Browser started and ready
Executing command play shape of you on youtube
=====…