okuvshynov / slowllama
Finetune llama2-70b and codellama on MacBook Air without quantization
MIT License · 448 stars · 34 forks
Issues
#17  use it for some tiny version of RLHF · okuvshynov · opened 7 months ago · 0 comments
#16  RuntimeError: The size of tensor a (2560) must match the size of tensor b (5120) at non-singleton dimension 0 · aspen01 · closed 8 months ago · 4 comments
#15  RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' · aspen01 · opened 8 months ago · 15 comments
#14  run prepare_model.py error · Barryzhang1 · closed 8 months ago · 4 comments
#13  use conf instead of magic 4 · roryclear · closed 9 months ago · 0 comments
#12  /slowllama/logs/prepare_model.log doesnt exist · miladf2 · closed 8 months ago · 3 comments
#10  finetune.py segmentation fault · QueryType · closed 1 year ago · 6 comments
#9   Mojo 🔥? · oaustegard · opened 1 year ago · 1 comment
#8   Fine-tune other models · Gincioks · opened 1 year ago · 13 comments
#7   slowllama: split each block to attention and feed forward · okuvshynov · closed 1 year ago · 0 comments
#5   Update README.md · okuvshynov · closed 1 year ago · 0 comments
#4   Fp16 · okuvshynov · closed 1 year ago · 0 comments
#3   Try dolly · okuvshynov · closed 1 year ago · 0 comments
#2   Slow service · okuvshynov · closed 1 year ago · 0 comments
#1   Fine-tuning codellama dataset · rajivpoddar · opened 1 year ago · 14 comments