tloen / alpaca-lora
Instruct-tune LLaMA on consumer hardware
Apache License 2.0 · 18.66k stars · 2.22k forks
Issues (sorted by newest)
#631 · RuntimeError: "normal_kernel_cpu" not implemented for 'Char' · Parkourer10 · closed 1 month ago · 0 comments
#630 · benchmark: add benchmark for alpaca lora · zeroorhero · opened 3 months ago · 0 comments
#629 · train: add benchmark metrics · zeroorhero · opened 3 months ago · 0 comments
#628 · InvalidHeaderDeserialization · liuting20 · opened 4 months ago · 0 comments
#627 · Why this error? ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded: base_model.model.model.layers.3, base_model.model.model.layers.4, base_model.model.model.layers.5, base_model.model.model.layers.6, base_model.model.model.layers.7, base_model.model.model.layers.8, base_model.model.model.layers.9, base_model.model.model.layers.10, base_model.model.model.la · hzbhh · opened 6 months ago · 0 comments
#626 · Single GPU vs multiple GPUs stack (parallel) · fdm-git · opened 8 months ago · 0 comments
#625 · decapoda-research/llama-7b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' · Aekansh-Ak · opened 8 months ago · 2 comments
#624 · Finetune scenarios · Aekansh-Ak · opened 8 months ago · 0 comments
#623 · 0223 webfix · niu769584641 · closed 9 months ago · 0 comments
#622 · tag: llama2-7b-hf · niu769584641 · closed 9 months ago · 0 comments
#621 · failed to run on colab: ModulesToSaveWrapper has no attribute `embed_tokens` · Vostredamus · opened 9 months ago · 0 comments
#620 · Is there a way to check if this training is all done? · OfficerChul · opened 9 months ago · 0 comments
#619 · Is it possible to combine alpaca-lora with RAG · HelloWorldLTY · opened 9 months ago · 0 comments
#618 · Fix the IndexError in get_response · RenzeLou · opened 9 months ago · 0 comments
#617 · Loading a quantized checkpoint into non-quantized Linear8bitLt is not supported · AngelMisaelPelayo · opened 9 months ago · 0 comments
#616 · TensorRT - Complete RoPE & repeat_KV implementation. · HengJayWang · opened 10 months ago · 0 comments
#615 · LAION Open Assistant data is already released · johnnynoone · opened 10 months ago · 0 comments
#612 · Fix Dockerfile Missing Scipy Installation · OliverGrace · opened 11 months ago · 0 comments
#611 · The weights are not updated · randomx207 · opened 11 months ago · 1 comment
#610 · generate error after hit submit btn · Minimindy · opened 12 months ago · 0 comments
#609 · safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization · mj2688 · opened 12 months ago · 15 comments
#608 · Errors of tuning on 70B LLAMA 2, does alpaca-lora support 70B llama 2 tuning work? · bqflab · opened 1 year ago · 0 comments
#607 · is there any flag to mark the model is safetensors or pickle format? · Tasarinan · opened 1 year ago · 0 comments
#606 · When I set load_in_8bit=true, some errors occurred.... · hychaochao · opened 1 year ago · 0 comments
#605 · AttributeError: module 'gradio' has no attribute 'inputs' · Wenzhi-Ding · opened 1 year ago · 19 comments
#604 · Load_in_8bit causing issues: Out of memory error with 44Gb VRAM in my GPU or device_map error · Nimisha-Pabbichetty · opened 1 year ago · 1 comment
#603 · can't load tokenizer · Guo-Chenxu · opened 1 year ago · 2 comments
#602 · generate error · hychaochao · closed 1 year ago · 1 comment
#601 · Are the saved models (either adapter_model.bin or pytorch_model.bin) only 25-26MB in size? · LAB-703 · opened 1 year ago · 5 comments
#600 · Error when loading lora weights · kelsey-um · closed 1 year ago · 2 comments
#599 · CUDA out of memory : I am using Colab T4 GPU · anshumansinha16 · opened 1 year ago · 2 comments
#598 · decapoda-research/llama-7b-hf no longer accessible · mabreyes · opened 1 year ago · 5 comments
#597 · Fix saved_pretrained saves empty adapter for new PEFT version. - Update finetune.py · Marvinmw · closed 7 months ago · 0 comments
#596 · load_dataset error with Kaggle environment · TrieuLe0801 · opened 1 year ago · 0 comments
#595 · Cannot backpropagate on the loss · ajsanjoaquin · opened 1 year ago · 1 comment
#594 · May I ask if this project supports internlm · venxzw · opened 1 year ago · 0 comments
#593 · No output when running generate.py · vifi2021 · closed 1 year ago · 0 comments
#591 · Fix incompatibility with newer transformers versions · almogtavor · opened 1 year ago · 0 comments
#590 · RuntimeError: shape '[32, 2, 64, 4096]' is invalid for input of size 26214400 · yourtiger · closed 1 year ago · 1 comment
#589 · Possible bugs when using generate_response for batched inference · binhmed2lab · opened 1 year ago · 0 comments
#588 · File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1033, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) _pickle.UnpicklingError: invalid load key, '<'. · jacolamanna · opened 1 year ago · 1 comment
#587 · Please update the template for Llama-2 chat completion · binhmed2lab · opened 1 year ago · 1 comment
#586 · i want to know how to not to use the wandb tool at the finetune.py · BBaekdabang · closed 1 year ago · 0 comments
#585 · Fine-tune argument resume_from_checkpoint starts from scratch instead of from checkpoint · prpercival · opened 1 year ago · 1 comment
#584 · GPU UTIL fluctuates wildly · ssocean · opened 1 year ago · 1 comment
#583 · How to control the save path of downloaded files · lanyunzhu99 · closed 1 year ago · 0 comments
#581 · All adapter_model.bin is the same · paulthewineguy · opened 1 year ago · 2 comments
#579 · Unable to determine this model's pipeline type. Check the docs -- Huggingface Inference · JonathanBechtel · opened 1 year ago · 1 comment
#578 · lora for text classification · paulthewineguy · opened 1 year ago · 0 comments
#577 · [Question] about pipeline for fine-tuning with conversational question answering dataset. · phamkhactu · opened 1 year ago · 0 comments