-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this answered in the FAQ? | Is there an existing ans…
-
For CE testing of chip-ID OCR we want to try ollama + minicpm.
We would like to try building ollama in Spack. Currently it builds an older version (0.1.31) with no CUDA support. The latest ollama is 0.…
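For the OCR experiment itself, a minimal sketch of how a request to a local ollama server could be assembled, assuming the default `/api/generate` endpoint, which accepts base64-encoded images for multimodal models; the model tag `minicpm-v`, the helper name, and the prompt are illustrative assumptions, not taken from this issue:

```python
import base64
import json

# Build the JSON payload for ollama's /api/generate endpoint.
# Multimodal models take base64-encoded images in the "images" field;
# the model tag "minicpm-v" and the prompt are illustrative only.
def build_ocr_request(image_bytes: bytes,
                      prompt: str = "Read the chip ID in this image.") -> dict:
    return {
        "model": "minicpm-v",
        "prompt": prompt,
        "stream": False,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

# The payload would be POSTed to http://localhost:11434/api/generate
payload = build_ocr_request(b"\x89PNG fake image data")
print(json.dumps(payload)[:60])
```

The actual HTTP call is omitted here since it needs a running ollama server; the point is only the payload shape.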
-
![image](https://github.com/user-attachments/assets/67ca0bad-8798-4aa2-a249-44b379104de8)
# ComfyUI Error Report
## Error Details
- **Node Type:** Joy_caption
- **Exception Type:** ValueError
…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
### Describe the bug
I tried loading and making infere…
-
Across different LMMs, the max-new-tokens value differs.
I believe we should have a consistent MAX_NEW_TOKENS across the project, set to 512 or 1024.
If it makes sense, I can create a PR to modify al…
-
This model has a vision adapter: mmproj-model-f16.gguf.
I have never used a vision model in LM Studio, so I don't know whether this is a bug or something specific to this particular model.
Because this model has strong …
-
For the minicpm-V2.6 model, I set --sft_type full for full-parameter fine-tuning. Compared with --sft_type lora, not only did GPU memory usage increase a lot, but training also became much slower. In the generated sft_args.json, --sft_type full is recorded correctly, but it still contains "target_modules": "^(llm|resampler)(?!.*(lm_head|output|emb|wte|sh…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this answered in the FAQ? | Is there an existing ans…
-
### Is there an existing issue / discussion for this?
- [x] I have searched the existing issues / discussions
### Is this answered in the FAQ? | Is there an existing ans…
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
When the workspace uses a visual model, the system functions normally if the system default visual model is used, but if the s…