okuvshynov/slowllama
Finetune llama2-70b and codellama on MacBook Air without quantization
MIT License · 431 stars · 33 forks
Issues
#17 (open) use it for some tiny version of RLHF — okuvshynov, 3 months ago, 0 comments
#16 (closed) RuntimeError: The size of tensor a (2560) must match the size of tensor b (5120) at non-singleton dimension 0 — aspen01, 3 months ago, 4 comments
#15 (open) RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' — aspen01, 3 months ago, 15 comments
#14 (closed) run prepare_model.py error — Barryzhang1, 3 months ago, 4 comments
#13 (closed) use conf instead of magic 4 — roryclear, 4 months ago, 0 comments
#12 (closed) /slowllama/logs/prepare_model.log doesnt exist — miladf2, 3 months ago, 3 comments
#10 (closed) finetune.py segmentation fault — QueryType, 8 months ago, 6 comments
#9 (open) Mojo 🔥? — oaustegard, 8 months ago, 1 comment
#8 (open) Fine-tune other models — Gincioks, 8 months ago, 13 comments
#7 (closed) slowllama: split each block to attention and feed forward — okuvshynov, 8 months ago, 0 comments
#5 (closed) Update README.md — okuvshynov, 8 months ago, 0 comments
#4 (closed) Fp16 — okuvshynov, 8 months ago, 0 comments
#3 (closed) Try dolly — okuvshynov, 9 months ago, 0 comments
#2 (closed) Slow service — okuvshynov, 9 months ago, 0 comments
#1 (open) Fine-tuning codellama dataset — rajivpoddar, 9 months ago, 14 comments