-
**AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.**
**Describe the bug**
Please provide me with the text from which you'd like …
-
### What is the issue?
Try this with Gemma 2 9B or 27B in Ollama; generation just never stops.
Give a succinct summary of the entire email conversation in not more than 40 words,
Emails To Andrew Fastow:
…
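For anyone reproducing this, here is a minimal sketch of calling Ollama's REST API with a hard cap on output tokens via the `num_predict` option, which bounds generation even when the model never emits a stop token. The server address, model tag (`gemma2:9b`), and token cap below are illustrative assumptions, not part of the original report:

```python
import json
import urllib.request

def build_generate_request(prompt: str, max_tokens: int = 60) -> dict:
    """Payload for Ollama's /api/generate endpoint; options.num_predict
    hard-caps the number of generated tokens."""
    return {
        "model": "gemma2:9b",  # illustrative model tag
        "prompt": prompt,
        "stream": False,
        "options": {"num_predict": max_tokens},
    }

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send the prompt to a locally running Ollama server (default port)."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This only bounds the damage from runaway generation; it does not address why the model fails to stop on its own.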
-
### ⚠️ Please check that this feature request hasn't been suggested before.
- [X] I searched previous [Ideas in Discussions](https://github.com/OpenAccess-AI-Collective/axolotl/discussions/categories…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
```
-
/kind bug
**Describe the solution you'd like**
The current huggingfaceserver requirements [set in pyproject.toml](https://github.com/kserve/kserve/blob/master/python/huggingfaceserver/pyproject.toml#L…
-
### What is the issue?
Similar (?) to #1952. I've been noticing that ollama will crash when using long context lengths on ROCm. In particular, the most noticeable thing is that I can continue large c…
-
Hi @danielhanchen, I tried training a Gemma 2 9B model today, but I ran into an error in llama.cpp while the model was being converted from bf16 to f16. The problem arose due to the changes made in the …
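For context, the conversion step in question is llama.cpp's HF-to-GGUF script, whose `--outtype` flag selects the tensor precision of the output file. A sketch of a typical invocation follows; the paths and output filename are placeholders, and this is not claimed to avoid the error above:

```python
import shlex

def gguf_convert_cmd(model_dir: str, outfile: str, outtype: str = "f16") -> list[str]:
    # convert_hf_to_gguf.py ships with recent llama.cpp checkouts;
    # --outtype accepts values such as f32, f16, bf16, and q8_0.
    return [
        "python", "convert_hf_to_gguf.py",
        model_dir,
        "--outfile", outfile,
        "--outtype", outtype,
    ]

cmd = gguf_convert_cmd("./gemma-2-9b", "gemma-2-9b-f16.gguf")
print(shlex.join(cmd))
# subprocess.run(cmd, check=True)  # run from inside a llama.cpp checkout
```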
-
Hello,
I'm encountering an out-of-memory (OOM) error while running Gemma with the parameters below. I'm working in an environment with 8 A100 GPUs (80 GB each).
```bash
model_pat…
```
-
I see this in the README:
`Supports EXL2, GPTQ and FP16 models`
but there are no links to the models themselves.
Can you give me the Hugging Face URLs for those recommended models? Or the models you think are "best" f…
-
Gemma 2 has been released. Is it supported?