-
- [ ] [classifiers/README.md at main · blockentropy/classifiers](https://github.com/blockentropy/classifiers/blob/main/README.md?plain=1)
# classifiers/README.md
## Fast Classifiers for Prompt Rout…
-
- [ ] [Strings](https://lispcookbook.github.io/cl-cookbook/strings.html)
# Strings
**DESCRIPTION:** "The Common Lisp Cookbook – Strings"
-
Traceback (most recent call last):
File "/root/miniconda/envs/thj/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/miniconda/…
-
### System Info
```shell
Optimum-habana main branch, at the commit 8863f1cc2be695a59673fc8a8095e25101a45f3f
SW hl-1.15.0
```
### Information
- [ ] The official example scripts
- [X] My own modifi…
-
### Summary
```
[INFO] LlamaEdge-RAG version: 0.11.1
[INFO] Model names: aya-23-8B-Q5_K_M
[INFO] Model aliases: default,embedding
[INFO] Context sizes: 16384
[INFO] Batch sizes: 512,512
Error: …
```
-
This is my trial of corpus training in Unsloth; the model load is the same as in the Unsloth example code.
![image](https://github.com/unslothai/unsloth/assets/97992669/be530784-428c-4c23-9637-d1d94c5…
-
Just tried the `Gemma` model, but I'm not sure why it performed that badly.
```sh
ollama pull gemma:2b
ollama run gemma:2b
```
**So is it a model issue, or is the model hosted here different?**
For …
-
Please let us know what model architectures you would like to be added!
**Up-to-date todo list below. Please feel free to contribute any model; a PR without device mapping, ISQ, etc. will still be …
-
Reopening issue about `gemma-7b` prediction values.
This issue is still not solved: the perplexity values of gemma-2b and gemma-7b are very different (gemma-7b is much worse, near random). Wikitext-v2 token pe…
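For reference (this is not the issue's evaluation code, just a minimal sketch of the usual definition): token perplexity on a corpus like Wikitext-2 is the exponential of the mean per-token negative log-likelihood, and "near random" means the NLL approaches `ln(vocab_size)`:

```python
import math

def perplexity(token_nlls):
    """Perplexity from per-token negative log-likelihoods (natural log)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A uniform (random) distribution over V tokens gives NLL = ln(V) per token.
# Gemma's vocabulary is 256,000 tokens, so a near-random model would show
# perplexity on the order of 256,000.
uniform_nll = math.log(256_000)
print(perplexity([uniform_nll] * 10))  # ≈ 256000
```

This makes it easy to sanity-check reported numbers: a 7B model scoring orders of magnitude above its 2B sibling on the same tokenizer points at a loading or weight-conversion problem rather than model capacity.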
-
Hi there,
I just installed Ollama 0.1.27 and tried to run gemma:2b, but it reports a CUDA out-of-memory error. Could you please investigate and figure out the root cause?
I'm using CPU `i7-4700HQ` wi…
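A workaround worth trying (an assumption, not a confirmed fix for this issue): Ollama's REST API accepts an `options.num_gpu` field, and setting it to 0 disables GPU layer offload so inference runs entirely on the CPU. A minimal sketch of the request payload:

```python
import json

# Request payload for Ollama's /api/generate endpoint, forcing CPU-only
# inference by offloading zero layers to the GPU.
payload = {
    "model": "gemma:2b",
    "prompt": "Why is the sky blue?",
    "options": {"num_gpu": 0},  # 0 GPU layers -> run entirely on CPU
}

body = json.dumps(payload)
print(body)
# Send with e.g.: curl http://localhost:11434/api/generate -d @payload.json
```

If the error disappears with `num_gpu: 0`, the root cause is likely GPU memory estimation rather than the model itself.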