-
### What is the issue?
I downloaded the codegemma and codellama models from Hugging Face and fine-tuned them using LLaMA-Factory. After importing the fine-tuned model into Ollama, Codellama works norm…
-
### Describe the bug
I installed text-generation-webui and downloaded the model (TheBloke_Yarn-Mistral-7B-128k-AWQ), but I can't run it. I chose Transformers as the model loader. I tried installing autoawq b…
-
As the title states, do we need to set the model loader to ExLlamav2_HF or ExLlamav2?
The [documentation](https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab) says:
`…
-
# Description
When attempting to set up llama-cpp-python for GPU support using the CUDA toolkit, following the documented steps, initialization of the llama-cpp model fails with an access violation…
-
### What is the issue?
The crash happens while processing a _png_ image via the minicpm-v:latest (1862d7d5fee50b69f6e3007ec999145ab38f17688251495f87669eb81e9dd97c) model. It occurs only on specific _pn…
-
Hello,
I have an issue while running Ollama on the A380 GPU.
This log snippet is from the Ollama log while executing a prompt from Open WebUI.
The system runs:
Fedora 39
Kernel 6.10.7
…
-
# Exception running langchain4j example JlamaAiFunctionCallingExamples with JLama version 0.7
Exception in thread "main" java.lang.ClassCastException: class [Lcom.github.tjake.jlama.safetenso…
-
Testing different models, mainly Gemma 2, I have been receiving a lot of blank responses (no line, no spacing, just blank, no characters at all). Usually a few regens fix it, but sometimes it takes qu…
-
Hi, why do I keep getting an error when using Kolors for text-to-image? I split the directory paths using '/', and the files are placed according to the file structure. The error shows: Storydiffusion Model Loader, 'NoneType' object has no attribute 'enable_model_cpu_offload'.
-
### What is the issue?
qwen4b works fine; all other models larger than 4b produce gibberish
```
time=2024-09-05T11:35:49.569+08:00 level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 1…
cyear, updated 2 weeks ago