-
**Describe the bug**
I just followed the instructions and ran the example, and I get the following error.
I have verified that Ollama is running and that I can use the models through ollama-webui.
I am usin…
-
Hi,
Context:
- I'm experimenting with both `Ellma` and `gptel` for the moment, as I find nice stuff in both.
- Until recently, both `gptel` and `Ellama` were working correctly
- Sometimes I ne…
-
Gemma 2B runs smoothly on a personal computer, so it could open up more possibilities for Langchain-Chatchat.
-
I loaded the model and ran inference, and I found that mem and norm_term are large, and round 2 is inf.
"""
[Update] self.norm_term 7444.0 1181.0
[Update] self.memory 11912.0 -11120.0
[Update] …
-
### Your current environment
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04 L…
-
## Generation after LoRA training cannot stop properly
The code at `lora/data/wikisql.py` removes the bos_token and eos_token, assuming the tokenizer will add them automatically. However, this is …
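One common fix for this failure mode is to append the EOS id to each training sequence explicitly when the tokenizer does not add it. The sketch below is illustrative only (the helper name and the EOS id are assumptions, not the project's API); it shows the idea of guaranteeing a terminal EOS token so the fine-tuned model learns when to stop generating.

```python
# Minimal sketch, assuming the tokenizer does NOT add special tokens
# automatically (the behaviour this issue describes). `add_eos` and
# EOS_ID are hypothetical names for illustration.
EOS_ID = 2  # hypothetical eos_token_id; real value comes from the tokenizer

def add_eos(token_ids: list[int], eos_id: int = EOS_ID) -> list[int]:
    """Append the EOS id unless the sequence already ends with it."""
    if token_ids and token_ids[-1] == eos_id:
        return token_ids
    return token_ids + [eos_id]
```

With labels terminated this way, generation can emit EOS and halt instead of running on indefinitely.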
-
ollama version is 0.1.27
Here's the example provided in the documentation.
> ollama run llama2 "Summarize this file: $(cat README.md)"
Here's what I tried using the Windows version, and the res…
-
https://huggingface.co/lmstudio-ai/gemma-2b-it-GGUF/tree/main
I noticed that the Gemma models are also available in GGUF format;
would it be possible to support loading the Gemma models?
Thanks! :)
-
If you forget to set `--from_safetensors True` when downloading models, you get:
```
ValueError: Expected 'checkpoints/google/gemma-2b' to contain .bin files
```
We assume that users perfectly…
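A friendlier behaviour would be to inspect the checkpoint directory before failing. The sketch below is a hypothetical illustration (the helper `detect_weight_format` is not the project's API): it detects which weight format is actually present and, when only safetensors files exist, points the user at the flag they forgot.

```python
# Hypothetical sketch: check what the checkpoint directory actually
# contains instead of assuming .bin files are present.
from pathlib import Path

def detect_weight_format(checkpoint_dir: str) -> str:
    """Return "bin" or "safetensors" based on the files found."""
    ckpt = Path(checkpoint_dir)
    if any(ckpt.glob("*.bin")):
        return "bin"
    if any(ckpt.glob("*.safetensors")):
        return "safetensors"
    raise ValueError(
        f"Expected {checkpoint_dir!r} to contain .bin or .safetensors files; "
        "if you downloaded safetensors weights, pass --from_safetensors True"
    )
```

Calling this before loading would turn the opaque `ValueError` into an actionable hint.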
-
### System Info
safetensors v0.4.2
huggingface_hub v0.22.0.dev0
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
We recently switched …