-
Hi,
I am experiencing the following issue. I tried these versions:
https://www.kaggle.com/models/keras/gemma/frameworks/Keras/variations/gemma_2b_en/versions/1
https://www.kaggle.com/models…
-
Please include information about your system, the steps to reproduce the bug, and the version of llama.cpp that you are using. If possible, please provide a minimal code example that reproduces the bu…
-
### System Info
@SunMarc
429 Client Error: Too Many Requests for url: [https://api-inference.huggingface.co/models](https://api-inference.huggingface.co/models/smart-panda314/dummy)
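A 429 usually means the Inference API's rate limit was hit, so the request should be retried after a delay. A minimal stdlib-only sketch of exponential backoff against the URL from the error (the helper names here are illustrative, not part of any Hugging Face client):

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/smart-panda314/dummy"

def backoff_delays(max_retries):
    """Exponential delays in seconds: 1, 2, 4, ..."""
    return [2 ** i for i in range(max_retries)]

def query_with_backoff(payload, token, max_retries=5):
    """POST the payload, sleeping and retrying whenever a 429 comes back."""
    data = json.dumps(payload).encode("utf-8")
    for delay in backoff_delays(max_retries):
        req = urllib.request.Request(
            API_URL,
            data=data,
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code != 429:
                raise  # a real error, not rate limiting
            time.sleep(delay)
    raise RuntimeError("rate limited: retries exhausted")
```

If the 429s persist even with backoff, the limit is likely account-level rather than per-request.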
### Who …
-
### System Info
> peft version: 0.9.0
> accelerate version: 0.27.2
> transformers version: 4.37.0
> trl version: 0.7.12.dev0
> base model: openai-community/gpt2
> hardware: 2xA100
I'm doing a…
-
**Describe the bug**
I saved a checkpoint "writer_gemma_2b_it-S51.easy". Could you please guide me on how to continue training from this checkpoint?
In addition, should I add save_optimizer_state=Tr…
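The general reason to save optimizer state alongside the weights is that resuming with fresh optimizer moments changes the training trajectory. A toy stdlib-only sketch of the pattern (this is illustrative only, not the actual gemma checkpoint format, and the field names are assumptions):

```python
import json
import os
import tempfile

def save_checkpoint(path, step, weights, optimizer_state):
    """Persist both weights and optimizer state so training can resume."""
    with open(path, "w") as f:
        json.dump(
            {"step": step, "weights": weights, "optimizer_state": optimizer_state},
            f,
        )

def load_checkpoint(path):
    """Restore the full training state from disk."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "ckpt_demo.json")
save_checkpoint(path, step=51, weights=[0.1, 0.2], optimizer_state={"m": [0.0, 0.0]})
state = load_checkpoint(path)
# Training would continue from state["step"] + 1 with the restored moments,
# instead of re-warming the optimizer from scratch.
```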
-
### System Info
- `transformers` version: 4.40.1
- Platform: Linux-5.14.0-362.24.1.el9_3.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- Huggingface_hub version: 0.21.4
- Safetensors v…
-
_Input data_
/Oración/
_Answer:_
The correct sentence would be as follows: /Oración/
Errors:
1.
2.
3.
Optionally, return the answer as JSON (as LangTools does).
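A minimal sketch of what such a structured answer could look like; the field names below are purely illustrative assumptions, not the schema LangTools actually emits:

```python
import json

# Hypothetical response shape: corrected sentence plus a numbered error list.
response = {
    "corrected_sentence": "…",
    "errors": [
        {"number": 1, "description": "…"},
        {"number": 2, "description": "…"},
        {"number": 3, "description": "…"},
    ],
}

# ensure_ascii=False keeps accented characters readable in the output.
print(json.dumps(response, ensure_ascii=False, indent=2))
```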
==============
-…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
I used the built gemma.cpp binary to run the 2B model and it works fine, but the 7B model seems abnormal. Is something wrong?
./gemma --tokenizer tokenizer.spm --compressed_weights 7b-pt-sfp.sbs --model 7b-it
-
claude:
I logged in on the claude.ai website and took the sessionKey value from the browser cookies, but I tried all of Claude's models and none of them could be used.
I deployed both via Docker and via linux_server --port 8080; the result is the same either way, showing v1/models:
```
{
"data": [
{
"id": "bi…