-
**Describe the bug**
The bug: llama does not appear to have a file named `config.json`.
The llama parameters are from Hugging Face.
**To Reproduce**
Steps to reproduce the behavior:
1. python3 apply_delt…
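This error usually means the base weights are not in Hugging Face format, which keeps a `config.json` next to the weight shards. A minimal sketch to check a local checkpoint directory before running the script (the file list is an illustrative assumption, not an exhaustive requirement):

```python
import os

# Files a Hugging Face-format Llama checkpoint normally ships with.
# This list is an assumption for illustration; tokenizer files vary.
EXPECTED = ["config.json"]

def missing_files(checkpoint_dir, expected=EXPECTED):
    """Return the expected files that are absent from checkpoint_dir."""
    return [name for name in expected
            if not os.path.exists(os.path.join(checkpoint_dir, name))]
```

If `config.json` is missing, the weights are likely still in the original Meta layout and need to be converted to the Hugging Face format first.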
-
I didn't take a screenshot of the error, but I have its log; it was an error that prevented movie removal from completing successfully.
```
NoReverseMatch at /remover_filme/
Reverse for 'delete_movie' with keyword ar…
```
-
/home/yanli/miniconda3/envs/ipex_ww44/lib/python3.9/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Plea…
-
Please add driver support for the 7922 and AX-series wireless NICs commonly used on x86 soft-router platforms. These cards account for a large share of the soft-router user base, and a wireless NIC without a working driver is a painful experience.
Thanks to the developers for their hard work.
-
As discussed on ##linux-surface, hardware buttons on some Surfaces are not working. Reloading the button module works post-boot:
```
sudo modprobe -r soc_button_array
sudo modprobe soc_button_array…
```
-
I am getting the following error when trying to query from a ConversationalRetrievalChain using HuggingFace.
`ValueError: Error raised by inference API: Model stabilityai/stablelm-tuned-alpha-3…
-
Using mistral and llama2 with ollama, I received the following error message: `Error: llama runner exited, you may not have enough available memory to run this model?`.
The `README.md` states that …
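One rough sanity check for the "not enough available memory" theory is to compare the model's quantized footprint against free RAM. A minimal sketch; the bits-per-weight and overhead factor are illustrative assumptions, not ollama's actual accounting:

```python
def approx_model_gib(n_params_billions, bits_per_weight=4, overhead=1.2):
    """Rough GiB needed to hold a quantized model in memory.

    overhead is an assumed fudge factor for KV cache and activations.
    """
    bytes_needed = n_params_billions * 1e9 * bits_per_weight / 8 * overhead
    return bytes_needed / 2**30

# A 7B model at 4-bit quantization lands around 4 GiB with this estimate.
print(f"{approx_model_gib(7):.1f} GiB")
```

If the estimate approaches or exceeds the machine's free memory, the runner exiting is plausible even for a "supported" model size.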
-
**Problem:**
I am aware everyone has different results; in my case I am running llama.cpp on a 4090 (primary) and a 3090 (secondary), so both are quite capable cards for LLMs.
I am getting around 800% s…
-
Now that Flash Attention 2 is natively supported in `transformers` for Llama / Falcon models, I tried to run the `sft_trainer.py` example and am running into various errors (reproduced below). I am in…
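For reference, the flag that enables Flash Attention 2 in `from_pretrained` changed names across `transformers` releases, which is a common source of these errors. A small helper sketch; the 4.36 cutoff is my assumption about when `attn_implementation` replaced `use_flash_attention_2`, so check the release notes for the installed version:

```python
def flash_attn_kwargs(transformers_version: str) -> dict:
    """Pick the from_pretrained kwarg for Flash Attention 2.

    Assumes the rename landed in transformers 4.36 (an assumption;
    verify against the changelog of the version you actually run).
    """
    major, minor = (int(part) for part in transformers_version.split(".")[:2])
    if (major, minor) >= (4, 36):
        return {"attn_implementation": "flash_attention_2"}
    return {"use_flash_attention_2": True}
```

Usage would look like `AutoModelForCausalLM.from_pretrained(model_id, **flash_attn_kwargs(transformers.__version__))`, assuming the `flash-attn` package itself is installed and a compatible GPU is present.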