-
## ❓ General Questions
After generating the Android MLCChat app based on the model gemma-2-2b-it-q4f16_1 and installing it on my device, I found that the chatbot does not seem to retain previous co…
-
It ignores the last 3 messages, so the reply answers an earlier message instead (I send A, get B; send C, get D; send E, get F; click "regenerate message" and get a variation of D).
[chat-2-gemma227b-regenerat…
-
"Using a local Ollama model
If you use a local Ollama model, you need to set the environment variable OLLAMA_ORIGINS=chrome-extension://bciglihaegkdhoogebcdblfhppoilclp; otherwise requests will fail with a 403 error.
Then, in the extension settings, enter any value for the apiKey, set the server address to http://localhost:11434, select a custom model, and enter the custom model's name, e.g. llama2.
…
-
```
2024-08-30 16:07:18,677 INFO: HTTP Request: POST http://127.0.0.1:11434/chat/completions "HTTP/1.1 404 Not Found"
2024-08-30 16:07:18,678 ERROR: Error while generating response
Traceback (most recen…
```
-
Hello, I get the error in the title when finetuning Phi3.5.
I believe I'm on the latest unsloth (installed from git with pip).
Context: finetuning Phi3.5 with code that already works with other u…
beniz updated
2 weeks ago
-
### How are you running AnythingLLM?
AnythingLLM desktop app
### What happened?
I uploaded a markdown file to the Workspace:
Using Ollama and `gemma2:latest` (max tokens: 4096) I asked:
>…
-
While the Replicate API approach allows you to select which version of flux to run, the local approach defaults to the dev model. Can you make it so that schnell can be run locally? Additionally, the …
-
### What happened?
I am running gemma2:9b and llama3.1:8b; gemma2 does create titles, but llama3.1 does not.
My current config:
```
# Configuration version (required)
version: 1.1.5
…
-
### What is the issue?
The `HTTP_PROXY` and `HTTPS_PROXY` variables aren't being used when requesting the model manifest file or pulling the model itself. The symptom that led me to the extra de…
-
### What is the issue?
I see this issue has been reported before in part, but none of the previous reports seem to test the possible methods of setting this option exhaustively.
The problem:
Ol…