-
I would like to request support for SearXNG. Since the SearX repo is outdated and no longer maintained, I have switched to SearXNG for all my projects and have a server running it. However, w…
-
Context: the "Hello world" example in the QuickStart, with the OllamaLLMUnit implementation.
1. OK, where does my Ollama server URL go?
The page about the Ollama implementation in the docs does not mention the URL whatsoever. Clie…
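For reference while the docs are clarified, here is a minimal sketch of pointing a client at an Ollama server directly over its REST `/api/generate` endpoint. This is not the OllamaLLMUnit configuration the question asks about, and the host and model values are placeholders:
```
import requests

# Placeholder base URL; Ollama's default server address is http://localhost:11434.
OLLAMA_URL = "http://localhost:11434"

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3", "prompt": "Hello world", "stream": False},
)
resp.raise_for_status()
# The non-streaming response carries the generated text in the "response" field.
print(resp.json()["response"])
```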
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
```
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-5.19.0-0_fbk12_zion_11583_g0bef9520ca2…
-
I encountered the following problem while running a local search from Google Colab.
Command:
```console
python -m graphrag.query --root ./ragtest --method local \
"What are the differences betwe…
-
Hi developers, I am using gemma.cpp to run inference with the Gemma2-2b-pt model; the model-related files were downloaded from [Kaggle](https://www.kaggle.com/models/google/gemma-2/gemmaCpp).
I am using the late…
-
macOS 14.6.1, M1
```
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["ACCELERATE_USE_MPS_DEVICE"] = "True"
from PIL import Image
from transformers import AutoModelForCausal…
```
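For context, a minimal sketch of the MPS setup this report appears to be using, assuming the truncated import is `AutoModelForCausalLM`. The checkpoint id is a placeholder, and for brevity the sketch is text-only even though the original also imports PIL, which suggests a multimodal model:
```
import os

# Set the MPS fallback flags before importing torch so unsupported ops fall back to CPU.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["ACCELERATE_USE_MPS_DEVICE"] = "True"

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b"  # placeholder checkpoint, not taken from the report
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

inputs = tokenizer("Hello", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```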
-
Authentication in code with `token=hf_token` doesn't work unless you use `subprocess.run(["local-gemma", "--token", hf_token, "What is the capital of France"])`.
`model = LocalGemma2ForCausalLM.from_pretr…`
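For context, a minimal sketch contrasting the two approaches described above. The checkpoint id and token are placeholders, the `local_gemma` import path follows the package's README, and passing `token=` to `from_pretrained` is an assumption based on the transformers convention rather than anything confirmed for local-gemma:
```
import subprocess
from local_gemma import LocalGemma2ForCausalLM

hf_token = "hf_..."  # placeholder token

# Workaround reported to work: call the local-gemma CLI and pass the token as a flag.
subprocess.run(["local-gemma", "--token", hf_token, "What is the capital of France"])

# In-code authentication that reportedly fails; token= is assumed to be forwarded
# to the Hugging Face Hub download, as in transformers' from_pretrained.
model = LocalGemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", token=hf_token)
```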
-
...so that I can also safeguard the sustainable accessibility of cases (zaken) in the future
[20181216_Voorstel_GEMMA_Architectuur_Duurzame_Toegankelijkheid.pdf](https://github.com/VNG-Realisatie/gemma-zaken/fil…
-
Hi everyone,
To use Groq, just add the following in **models.py**:
```
url = 'https://api.groq.com/openai/v1/chat/completions'
groq = dict(type=GPTAPI,
            model_type='gemma2-9b-it',
            k…
-
Hello, I get the error in the title when finetuning Phi3.5.
I believe I'm on the latest unsloth (installed from git with pip).
Context: finetuning Phi3.5 with code that already works with other u…