-
**Is your feature request related to a problem? Please describe.**
Yes, this feature request is related to a problem (maybe specific to my use case). I can't load custom models from Alpaca.
**Desc…
-
### What is the issue?
Hello,
I downloaded the Q4KM model from https://huggingface.co/LiteLLMs/French-Alpaca-Llama3-8B-Instruct-v1.0-GGUF/tree/main/Q4_K_M
renamed locally to French-Alpaca-Llama3-8B…
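For reference, the usual way to load a locally downloaded GGUF into Ollama is through a Modelfile whose `FROM` line points at the file, followed by `ollama create`. This is a hedged sketch: the file name `french-alpaca-q4_k_m.gguf` below is a placeholder, not the exact renamed file, and the `ollama` steps are guarded so the snippet does not fail on a machine without Ollama installed.

```shell
# Write a minimal Modelfile pointing at the local GGUF
# (placeholder file name; substitute your actual renamed file).
cat > Modelfile <<'EOF'
FROM ./french-alpaca-q4_k_m.gguf
EOF

# Register and run the model only if the Ollama CLI and the file exist.
if command -v ollama >/dev/null 2>&1 && [ -f ./french-alpaca-q4_k_m.gguf ]; then
  ollama create french-alpaca -f Modelfile
  ollama run french-alpaca
fi
```

If `ollama create` rejects the file, the GGUF may use a quantization or architecture the installed Ollama version does not yet support.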
-
Hi there,
I am getting very unsatisfactory results from the alpaca.7B model compared to the Alpaca-LoRA model. I am giving alpaca.7B the following prompt, but get nothing useful out of it. However, A…
-
### System Info
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GC…
-
I don't understand the conceptual usefulness of masking out the prompt.
I have seen that there is a comment in scripts/prepare_alpaca.py that says:
`mask_inputs: bool = False, # as in alpac…
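For context, "masking the inputs" in instruction fine-tuning usually means setting the label entries for the prompt tokens to an ignore value, so the cross-entropy loss is computed only on the response tokens and the model is not trained to reproduce the prompt itself. Here is a minimal hedged sketch of that idea; the function name, `IGNORE_INDEX` constant, and tensor shapes are illustrative assumptions, not the script's exact code.

```python
import torch

# Value that PyTorch's cross_entropy skips by default (ignore_index=-100).
IGNORE_INDEX = -100

def build_labels(input_ids: torch.Tensor, prompt_len: int, mask_inputs: bool) -> torch.Tensor:
    """Copy input_ids into labels; if mask_inputs is set, replace the first
    prompt_len positions with IGNORE_INDEX so only the response contributes
    to the loss."""
    labels = input_ids.clone()
    if mask_inputs:
        labels[:prompt_len] = IGNORE_INDEX
    return labels

ids = torch.tensor([1, 2, 3, 4, 5, 6])
masked = build_labels(ids, prompt_len=3, mask_inputs=True)
# masked -> tensor([-100, -100, -100, 4, 5, 6])
```

With `mask_inputs=False` the labels are just a copy of the inputs, so the loss also rewards regenerating the prompt — which is the behavior the comment in the script refers to.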
-
## Paste the link of the GitHub organisation below and submit
https://github.com/alpacahq
---
###### Please subscribe to this thread to get notified when a new repository is created
-
After loading all the annotation chunks, it raised the ValueError shown below.
![image](https://github.com/tatsu-lab/alpaca_eval/assets/109973290/e22a5b01-11ca-4405-8413-e10778d85c25)
What should I do?
-
Thanks for your wonderful work! I had a problem when fine-tuning the model.
https://github.com/ZrrSkywalker/LLaMA-Adapter/blob/5f1b37e0e2f3ab2e423ea71234c89829fa271ad7/alpaca_finetuning_v1/llama/mode…
-
I'm trying to use alpaca_finetuning_v1/llama to autoregressively generate text for validation during finetuning; however, in alpaca_finetuning_v1/llama/generation.py line 42: logits = self.…
-
**Is your feature request related to a problem? Please describe.**
I already have many GiB of LLMs downloaded on my PC. My connection is metered and slow, and storage is limited as well.
I can't …