-
```
What steps will reproduce the problem?
1. /usr/libexec/airportd en1 sniff 1
2. at the same time: $ pyrit -r /tmp/airportSniff4XMZYE.cap -o PYRT1.cap stripLive
3.
What is the expected output? What…
```
-
This happens to me with "Phi-3-mini-4k-instruct-q4f32_1-MLC-1k" and "gemma-2b-it-q4f32_1-MLC-1k" after updating my GPU drivers to:
> Intel(R) UHD Graphics 630
>
> Driver version: 31.0.101.2115
…
-
When I run llm_inference on localhost, it can access a model file like "gemma-2b-it-gpu-int4.bin" in the project folder, but when I run llm_inference on Firebase Hosting, it cannot access o…
-
**Describe the bug**
I built the Java API and used the generated artifacts in another application; however, I got the below error while using the sample `SimpleGenAI` class.
```
Exception in thread "m…
```
-
**The use_error_term flag prevents Feature Ablation**
Feature Ablation with use_error_term = True works with GPT2 but not with Gemma-2.
**Code example**
```{python}
from sae_lens i…
```
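For context, a minimal sketch of the working GPT2 case, assuming the SAELens `HookedSAETransformer` API from its tutorials; the release, SAE id, and feature index are placeholders:
```{python}
from functools import partial
from sae_lens import SAE, HookedSAETransformer

model = HookedSAETransformer.from_pretrained("gpt2")
# release/sae_id are placeholders taken from the public GPT2 SAE releases
sae, cfg_dict, _ = SAE.from_pretrained(
    release="gpt2-small-res-jb",
    sae_id="blocks.8.hook_resid_pre",
)
sae.use_error_term = True  # add the reconstruction error back so the clean forward pass is unchanged

def ablate_feature(acts, hook, feature_idx=0):
    acts[..., feature_idx] = 0.0  # zero out one SAE feature activation
    return acts

tokens = model.to_tokens("Feature ablation sanity check")
with model.saes(saes=[sae]):
    logits = model.run_with_hooks(
        tokens,
        fwd_hooks=[(f"{sae.cfg.hook_name}.hook_sae_acts_post",
                    partial(ablate_feature, feature_idx=1337))],
    )
```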
-
Hi team,
I want to fine-tune all parameters of the Gemma model. I noticed there is an example (https://github.com/Lightning-AI/litgpt/blob/main/litgpt/finetune/full.py). Can I use this example for…
-
see #27
https://ai.google.dev/gemma/docs?hl=en
https://www.kaggle.com/models/google/gemma
Gemma on Vertex AI Model garden
https://console.cloud.google.com/vertex-ai/publishers/google/model-gard…
-
Hi team, I checked locallama and found that Gemma can work well with the Self-Extend method. It would be awesome if this technique could be added to gemma.cpp (a sketch of the core idea follows the references below).
References:
- [locallama](http…
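For reference, here is a minimal sketch of the bi-level position remapping at the core of Self-Extend, as I understand it from the paper; the window and group sizes are illustrative, and in gemma.cpp this would be applied to the relative positions fed into RoPE:
```{python}
def self_extend_rel_pos(distance: int, window: int = 512, group: int = 8) -> int:
    """Map a raw query-key distance to the distance used for RoPE.

    Nearby tokens keep their exact relative positions; distant tokens
    are floor-divided into groups, shifted so the two regimes meet at
    the window boundary.
    """
    if distance <= window:
        return distance  # neighbor attention: positions unchanged
    # grouped attention: positions grow group-times slower past the window
    return distance // group + window - window // group

# distances past the window stay close to what the model saw in training
assert self_extend_rel_pos(512) == 512
assert self_extend_rel_pos(4096) == 960  # 4096 // 8 + 512 - 512 // 8
```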
-
**Add Chat Completion API**
Here are the docs: [chat completion](https://huggingface.co/docs/api-inference/tasks/chat-completion)
```
from huggingface_hub import InferenceClient
client = Inference…
```
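A minimal sketch of what the requested support could look like with the existing `InferenceClient.chat_completion` method; the model id is a placeholder, and any chat-capable hosted model should work:
```{python}
from huggingface_hub import InferenceClient

# model id is a placeholder; any chat-capable model on the Inference API works
client = InferenceClient("google/gemma-1.1-7b-it")

response = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```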
-
This model outputs complete noise; I can't get it to do anything useful. Is there a reason to include it at all? Is there any use case where it would be useful? Or is there a bug?
![image](https://…