-
### OS
Windows
### GPU Library
CUDA 12.x
### Python version
3.11
### Pytorch version
2.4.1+cu121
### Model
google/gemma-2-27b-it
### Describe the bug
starting approxim…
-
I'm having an issue where I have Ollama and Llama 2 downloaded, but I'm getting nowhere with the AI. It gives me the entire conversation spiel, but when I try to talk to it, it just gives me an error.…
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
vllm == 0.5.5
FlashInfer==0.1.6+cu121torch2.4
### 🐛 Descri…
-
Installed Flux-Magic as described with no errors. I use the local Ollama service with the Gemma 2 model and a local ComfyUI service. In the UI, when I enter any prompt and press the "Magic!" button, this happens:
D:\Flux…
-
Hi Jason,
I upgraded from 0.1 to 0.1.3 yesterday and haven't been able to get plock to work. I removed the old settings.json file, and it regenerated a new settings file in the new format (process, p…
-
# Fix for gemma-2-9b - run with bfloat16
![image](https://github.com/ObrienlabsDev/machine-learning/assets/24765473/4e149bf2-e84e-48a8-b3bc-1939d1543f66)
https://huggingface.co/google/gemma…
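The fix above can be sketched with the Hugging Face Transformers API. This is a minimal sketch, not the original poster's exact code: the model id `google/gemma-2-9b-it` and the `device_map="auto"` placement are assumptions. The reason bfloat16 helps is that it keeps float32's 8 exponent bits, so large Gemma 2 activations don't overflow to inf/NaN the way they can in float16.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_gemma2_bf16(model_id: str = "google/gemma-2-9b-it"):
    """Load Gemma 2 with bfloat16 weights (assumed model id, gated on HF Hub)."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the key fix: bfloat16 instead of float16
        device_map="auto",           # place layers on the available GPU(s)
    )
    return tokenizer, model

# Why bfloat16 works where float16 overflows: it shares float32's exponent
# range, so its representable maximum is vastly larger than float16's 65504.
assert torch.finfo(torch.bfloat16).max > torch.finfo(torch.float16).max
```

Loading the model requires accepting the Gemma license and authenticating with the Hub; the dtype argument is the only change relative to a default load.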
-
Dear authors,
Great work, thanks for sharing.
I am trying to fine-tune bge-reranker-v2-gemma using my own dataset.
However, according to the official fine-tuning command provided:
```bash
…
-
## 🐛 Bug
I am trying to work with the Jiutian 13.9b MoE model, but I am getting an error at the model compilation step.
## To Reproduce
Steps to reproduce the behavior:
1.
pip install --pre -U -f https://…
-
I previously tried InternLM2.5 7B and the results were good; 20B is probably even better. Quantized to int4 or int8, a 20B model needs roughly 10-20 GB of VRAM, which a consumer GPU can handle.
Qwen2 has never released a model in the 10B-32B range, so this could serve as a substitute.
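The 10-20 GB estimate follows from simple arithmetic: weight memory is roughly parameter count times bytes per parameter (0.5 bytes at int4, 1 byte at int8). The sketch below checks that, ignoring KV cache, activations, and quantization scale/zero-point overhead.

```python
# Rough weight-only VRAM estimate for a 20B-parameter model.
# Ignores KV cache, activations, and quantization scales/zero-points.
PARAMS = 20e9

def weight_gb(bytes_per_param: float) -> float:
    """Weight memory in decimal gigabytes."""
    return PARAMS * bytes_per_param / 1e9

int4_gb = weight_gb(0.5)  # 4 bits = 0.5 bytes per weight
int8_gb = weight_gb(1.0)  # 8 bits = 1 byte per weight

print(f"int4: {int4_gb:.0f} GB, int8: {int8_gb:.0f} GB")  # → int4: 10 GB, int8: 20 GB
```

Real deployments need a few extra gigabytes on top of this for the KV cache, which grows with context length and batch size.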
-
When trying to run models downloaded from sources other than GPT4All, the application crashes. The required models were downloaded to the required folder.