Locutusque / TPU-Alignment
Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free
Apache License 2.0 · 219 stars · 22 forks
Issues (closed)
#14 · GPT2 Rules · michaelifebrian · closed 1 week ago · 1 comment
#13 · Full finetune or LoRA? · staticpunch · closed 5 months ago · 1 comment
#12 · How to finetune the qwen1.5-0.5b model with an Alpaca-type dataset? · hunt-47 · closed 6 months ago · 0 comments
#11 · Support for Llama-like models (Deepseek) · VatsaDev · closed 5 months ago · 8 comments
#10 · SM3 optimizer + other memory optimizations · IsNoobgrammer · closed 8 months ago · 3 comments
#9 · Dataset · francqz31 · closed 8 months ago · 1 comment
#8 · Add Gemma support, smoothed loss fn, typo fix, val_steps, better Wandb logging · IsNoobgrammer · closed 8 months ago · 1 comment
#7 · [feature request] Add support for Google Gemma models · windmaple · closed 8 months ago · 4 comments
#6 · Incorrect implementation for calculating loss · IsNoobgrammer · closed 8 months ago · 2 comments
#5 · Phi architecture + other features · IsNoobgrammer · closed 8 months ago · 6 comments
#4 · Axolotl · fakerybakery · closed 9 months ago · 2 comments
#3 · License · fakerybakery · closed 9 months ago · 2 comments
#2 · Padding token? · IsNoobgrammer · closed 9 months ago · 3 comments
#1 · Unsupported model for partitioning · windmaple · closed 9 months ago · 4 comments