-
## 🐛 Bug
## To Reproduce
Steps to reproduce the behavior:
1. Download the prebuilt Android app.
2. Download Gemma 2B.
3. Start chatting.
MLCChat failed
Stack trace:
org.apache.tvm.Base$TVMError: Int…
-
Opening a new issue (see https://github.com/ollama/ollama/pull/2195) to track support for integrated GPUs. I have an AMD 5800U CPU with integrated graphics. As far as I have researched, ROCR lately does su…
-
### Feature request
Currently, `AutoTokenizer.from_pretrained` only accepts the `use_fast` parameter from the keyword arguments.
However, it would be beneficial if we could set the default value for…
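Until such a default-setting mechanism exists, one workaround is to pin the keyword with `functools.partial`. A minimal sketch of the pattern (the `from_pretrained` below is a stand-in that mirrors the keyword, not the real `transformers` loader, so the example runs offline):

```python
from functools import partial

# Hypothetical stand-in for AutoTokenizer.from_pretrained, so the sketch
# runs without downloading a model; the real call accepts the same keyword.
def from_pretrained(name, use_fast=True, **kwargs):
    return {"name": name, "use_fast": use_fast, **kwargs}

# Pin a project-wide default instead of repeating use_fast at every call site.
load_slow = partial(from_pretrained, use_fast=False)

tok = load_slow("bert-base-uncased")
print(tok["use_fast"])  # → False
```

Call sites can still override the pinned default, since keyword arguments passed at call time take precedence over those bound in `partial`.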
-
Thanks for your code!
I am going through it, and the part dealing with data format handling in finetune.py is a bit confusing to me. However, due to certain reasons, I am currently …
-
#### Describe the bug
After loading the model (gemma2:latest) in Alpaca on openSUSE Tumbleweed, attempting to use the chat feature results in a connection error. The local Ollama instance is reset, b…
-
### What is the issue?
When running Ollama on Windows, attempting to run `ollama pull llama3.1` results in:

    ollama pull llama3.1
    pulling manifest
    Error: Incorrect function.
### OS
Windows
### GPU
…
-
When I use Ollama with Llama 3.1 8B, the agent just keeps looping and listing out its functionality.
```
I'm an Autonomous JSON AI Task-Solving Agent. I can memorize, forget, and delete memories.…
-
I run IIAB on Ubuntu 24 natively vs. IIAB on Ubuntu 24 in a VM running on Ubuntu 20.
There is a big difference in download speed: 500KB vs 3.5MB.
I'm guessing that my 2015 laptop has some old dr…
-
### Describe the issue
![image](https://github.com/microsoft/graphrag/assets/140797957/5271d351-a6a4-430c-a574-a299e37e8cff)
### Steps to reproduce
_No response_
### GraphRAG Config Used
_No re…
-
Hi, I would like to get a YAML file for embedding in OCP; I only see examples for inference, but no YAML file for embedding.
Regards.