mikeybellissimo / LoRA-MPT
A repo for finetuning MPT using LoRA. It is currently configured to work with the Alpaca dataset from Stanford but can easily be adapted to use another.
Apache License 2.0 · 18 stars · 7 forks
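For context on what the repo does, here is a minimal, framework-free sketch of the LoRA update it applies to MPT's attention weights: a frozen weight W is used as W' = W + (alpha / r) * (B @ A), with only the small matrices A and B trained. All names and shapes below are illustrative assumptions, not the repo's actual code.

```python
def matmul(a, b):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def lora_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A); W itself stays frozen."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# With B initialized to zeros (standard LoRA init), the adapted weight
# equals the original, so training starts from the pretrained model.
W = [[1.0, 2.0], [3.0, 4.0]]   # frozen 2x2 base weight (toy size)
A = [[0.5, -0.5]]              # shape (r, d_in) with r = 1
B = [[0.0], [0.0]]             # shape (d_out, r), zero-initialized
assert lora_weight(W, A, B, alpha=2, r=1) == W
```

Because only A and B are trained, the per-layer trainable parameter count drops from d_out * d_in to r * (d_in + d_out), which is why LoRA adapters are small.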
Issues (newest first)
#9 Evaluation · benam2 · opened 1 year ago · 3 comments
#8 Finetuning on openassis · benam2 · opened 1 year ago · 3 comments
#7 lora model size is always ~400bytes · erlakshmi123 · opened 1 year ago · 3 comments
#6 train/train_loss is always 0.0 · lorabit110 · opened 1 year ago · 4 comments
#5 Does it actually save the LoRA weights? · lorabit110 · closed 1 year ago · 1 comment
#4 Target modules [Wqkv] not found in the base model. · madaracelio · opened 1 year ago · 4 comments
#3 Problem about generate · jianchaoji · opened 1 year ago · 11 comments
#2 Story · wheel-is · opened 1 year ago · 1 comment
#1 Unsupervised training possible? · leoplusx · opened 1 year ago · 1 comment
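Issues #7 and #5 both concern adapter checkpoints of only ~400 bytes, which suggests an adapter state dict that was saved empty rather than a genuinely small adapter. As a hypothetical back-of-envelope check (dimensions below are illustrative MPT-7B-like values, not taken from the repo, and the fused Wqkv projection is simplified to a square d_model x d_model matrix), a real LoRA adapter should still hold millions of parameters:

```python
def lora_param_count(d_model, n_layers, r, n_targets_per_layer=1):
    """Parameters in LoRA matrices A (r x d) and B (d x r) per adapted module,
    summed over layers. Simplified to square d_model x d_model projections."""
    per_module = 2 * r * d_model
    return per_module * n_targets_per_layer * n_layers

# Illustrative MPT-7B-ish dimensions: d_model=4096, 32 layers, rank 8,
# one adapted attention projection per layer.
n_params = lora_param_count(d_model=4096, n_layers=32, r=8)
adapter_bytes = n_params * 2  # fp16: 2 bytes per parameter

assert n_params == 2_097_152
assert adapter_bytes > 400  # a healthy adapter is orders of magnitude above 400 bytes
```

So a ~400-byte file cannot contain the A/B matrices; checking that the saved state dict is non-empty before writing it would catch this early.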