-
Qwen2 deployment with Ollama fails with the error "Ollama_llama_server: entry point could not be found"
![Issue 2](https://github.com/user-attachments/assets/96df807c-2f06-462f-bc43-9c9de533875e)
Test environment: Intel Core Ultra 5 125H CPU, Windows 11 23H2…
-
Hello there,
I am running Ollama locally on my machine from the CLI, and I have the API endpoint running as below:
```
[ollama]
api_base = "http://127.0.0.1:11434/"
```
Also, I have configured the settings for …
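For reference, here is a minimal sketch of how that endpoint can be exercised once the config above is in place; the model name `llama3` and the use of the `requests` library are assumptions for illustration, not part of the setup described:
```
# Minimal check that the local Ollama endpoint from the config above
# responds; "llama3" is an assumed model tag you may need to change.
import requests

API_BASE = "http://127.0.0.1:11434"

resp = requests.post(
    f"{API_BASE}/api/generate",
    json={"model": "llama3", "prompt": "Say hello.", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```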
-
Any plans to add local LLMs?
-
# Tokenizer Import Error When Using Ollama Models
## Description
When attempting to use Ollama models (llama3, llama3.1, mistral), the application fails due to a tokenizer import error. The error …
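As an illustration of the failure mode, here is a hypothetical guarded import of the kind an application can use to degrade gracefully instead of crashing; the function name, the `transformers` dependency, and the characters-per-token fallback are all assumptions for the sketch, not the application's actual code:
```
# Hypothetical sketch: guard the tokenizer import so a missing or broken
# dependency degrades to a rough estimate instead of a hard failure.
def count_tokens(text: str) -> int:
    try:
        # Assumed dependency for this sketch; the real failing import is
        # truncated in the report above.
        from transformers import AutoTokenizer
        tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
        return len(tok.encode(text))
    except ImportError:
        # Crude fallback: ~4 characters per token is a common heuristic.
        return max(1, len(text) // 4)
```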
-
I am trying to serve models with Ollama on the **Jetson AGX Orin 64 GB** developer kit.
The first example `jetson-containers run --name ollama $(autotag ollama)` works and the server responds on 127.0.0.1…
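To narrow down whether the server inside the container is reachable beyond the loopback interface, a small probe along these lines may help; the LAN address is hypothetical, and 11434 being the listening port is an assumption based on Ollama's default:
```
# Quick reachability probe for the Ollama port, from inside or outside
# the container; 11434 is Ollama's default port.
import socket

def is_up(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_up("127.0.0.1"))     # loopback, as in the example above
print(is_up("192.168.1.50"))  # hypothetical LAN address of the Orin
```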
-
Ollama logs look awesome in Humanlog, but they could use a few improvements.
![image](https://github.com/user-attachments/assets/eb731310-f80d-4df1-b287-8efb046ef410)
Logs attached: [ollama_serve_output…
-
Hi, thanks, that's very interesting.
I'd like to know if you've ever had `Unclosed client session` errors. This is the second time I've used the mcts functions with open-webui, and each time I get these erro…
-
With the [ollama](https://ollama.com/) project it's easy to host your own AI models.
You can set up bring-your-own-key (BYOK) to connect to an ollama server, and see if you can use StarCoder2 for code …