-
Hello, we encountered some issues while reproducing the test results in the paper. On AlpacaEval 2.0, we noticed that your GitHub page states that you followed the default settings and chose **'al…
-
First, thank you for your work and efforts in maintaining this framework.
I would like to migrate the dependency from `alpaca-trade-api` to `alpaca-py`, as the former library has been officially deprec…
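If it helps, here is a minimal sketch of what a migrated call site might look like under `alpaca-py`, assuming only basic order submission is used (`submit_market_order` is a hypothetical helper of mine, not part of either SDK; the import guard is only so the sketch degrades gracefully where the package is absent):

```python
"""Sketch of moving basic order submission from alpaca-trade-api to alpaca-py."""
try:
    # alpaca-py splits trading and market-data into separate clients.
    from alpaca.trading.client import TradingClient
    from alpaca.trading.requests import MarketOrderRequest
    from alpaca.trading.enums import OrderSide, TimeInForce

    def submit_market_order(key: str, secret: str, symbol: str, qty: int):
        """Hypothetical helper: place a paper-trading market buy order."""
        client = TradingClient(key, secret, paper=True)
        order = MarketOrderRequest(
            symbol=symbol,
            qty=qty,
            side=OrderSide.BUY,
            time_in_force=TimeInForce.DAY,
        )
        return client.submit_order(order_data=order)
except ImportError:
    TradingClient = None  # alpaca-py not installed in this environment
```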
-
We're running an ISLE site behind a reverse proxy, and when we save a media resource, it generates a lot of error messages[0] when invoking a sub-service. When I dig into any of the sub-services, it oft…
-
When I fine-tune Llama-7B:
```
# alpaca
torchrun --nproc_per_node=8 --master_port=29000 train.py \
--model_name_or_path .cache/hub/models--meta-llama--Llama-2-7b-hf/snapshots/01c7f73d771dfac7d…
```
-
I saved the merged model using the code below:
model1 = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16) # or torch_dtype="auto"
When I use it, I get the following error:
code:
…
-
**Describe the bug**
System: Fedora 40, KDE spin, 16 GB RAM, 12 × Intel® Core™ i5-10400 CPU @ 2.90GHz, NVIDIA GTX 1650 Super.
The fo…
-
Below is the code I am using, but it generates output that includes the input and the instructions.
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
…
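A common cause of echoed instructions is that `generate()` returns the prompt token ids followed by the newly generated ones, so decoding the full output reproduces the prompt. Slicing off the prompt length before decoding usually fixes it. A minimal illustration with plain lists standing in for token-id tensors (the values are made up; real code would slice `outputs[0]` by `inputs["input_ids"].shape[1]` and pass the result to `tokenizer.decode`):

```python
# Plain-list stand-ins for token-id tensors (illustrative values only).
input_ids = [1, 10, 11, 12]            # the prompt, 4 tokens
outputs = input_ids + [20, 21, 22]     # generate() echoes the prompt first
new_tokens = outputs[len(input_ids):]  # keep only the generated part
print(new_tokens)  # → [20, 21, 22]
```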
-
I just came across this package. Looks very useful, thanks (especially since I'm currently working on a problem where the IPP is likely an issue).
Two quick questions:
1. Do you plan to submit t…
-
Hello, how should I set the decoding parameters (e.g., temperature) for Gemma-2? My result is about 50.0, far from the benchmark's 76.
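For reference, this is the shape of decoding configuration typically passed to `generate()`; the values below are illustrative assumptions on my side, not the benchmark's official settings:

```python
# Illustrative decoding settings (assumed values, not the official config).
gen_kwargs = {
    "do_sample": True,       # enable stochastic decoding
    "temperature": 0.7,      # lower = closer to greedy
    "top_p": 0.9,            # nucleus-sampling cutoff
    "max_new_tokens": 2048,  # generation length cap
}
# Greedy variant: deterministic decoding, temperature then has no effect.
greedy_kwargs = {**gen_kwargs, "do_sample": False}
```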
-
Nemo Mistral error: there was an error with the local Ollama instance
(base) ferran@z590i:~$ flatpak run com.jeffser.Alpaca
F: Not sharing "/usr/share" with sandbox: Path "/usr…