OptimalScale / LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
https://optimalscale.github.io/LMFlow/
Apache License 2.0 · 8.11k stars · 819 forks
Issues
#821 Hello, why I fine-tuning Qwen1.5-1.8B-Base and test with CMMLU, the model answer repetition (13416157913, closed 1 month ago, 6 comments)
#820 Add chatglm3 template (wheresmyhair, closed 1 month ago, 0 comments)
#819 Yizhenjia template update (wheresmyhair, closed 1 month ago, 0 comments)
#818 add lisa-diffusion project (shaoshitong, closed 1 month ago, 0 comments)
#817 Change conversation template file structure (wheresmyhair, closed 1 month ago, 0 comments)
#816 Fine-Tuning Crashes for no reason when Eight GPU cards are used. (OscarC9912, opened 1 month ago, 4 comments)
#815 [BUG] did not output the eval results at all. (xigua314, opened 1 month ago, 3 comments)
#814 Add DeepSeek template and template register (wheresmyhair, closed 1 month ago, 0 comments)
#813 Hello,How to go on fine-tuning with checkpoint? (13416157913, closed 1 month ago, 2 comments)
#812 Hello,How to go on fine-tuning with checkpoint? (13416157913, closed 1 month ago, 0 comments)
#811 Hello,How to go on fine-tuning with checkpoint? (13416157913, closed 1 month ago, 0 comments)
#810 DeepSeek conversation template support (wheresmyhair, closed 1 month ago, 0 comments)
#809 Fix eval_dataset number log. (uApiv, closed 1 month ago, 0 comments)
#808 README hindi update (wheresmyhair, closed 1 month ago, 0 comments)
#807 README es update (wheresmyhair, closed 1 month ago, 0 comments)
#806 Weird Loss with LISA (harry7171, opened 1 month ago, 1 comment)
#805 README jp update (wheresmyhair, closed 1 month ago, 0 comments)
#804 README kr update (wheresmyhair, closed 1 month ago, 0 comments)
#803 README zh update (wheresmyhair, closed 1 month ago, 0 comments)
#802 Merge LoRA and base model (wheresmyhair, closed 1 month ago, 0 comments)
#801 Out Of Memory Issue LISA (harry7171, closed 1 month ago, 4 comments)
#800 README update, remove lora save aggregate shell (wheresmyhair, closed 2 months ago, 0 comments)
#799 Hello,Where is the script run_finetune_with_lora_save_aggregated_weights.sh?Why I can't find it in LMFlow/scripts ? (13416157913, closed 2 months ago, 2 comments)
#798 LMFlow not support NVIDIA driver 11070? (13416157913, closed 2 months ago, 2 comments)
#797 Add DPO support (wheresmyhair, closed 2 months ago, 0 comments)
#796 Hello , Can LMFlow support Qwen1.5-1.8B model Fine-tuning? (13416157913, closed 2 months ago, 3 comments)
#795 ValueError: mutable default <class 'lmflow.utils.conversation_formatter.StringFormatter'> for field user_formatter is not allowed: use default_factory (13416157913, closed 2 months ago, 8 comments)
#794 README update, adding conversation template examples (wheresmyhair, closed 2 months ago, 0 comments)
#793 README update (wheresmyhair, closed 2 months ago, 0 comments)
#792 Remove lora qlora aggregated shell (wheresmyhair, closed 2 months ago, 0 comments)
#791 Remove lora qlora aggregated shell (wheresmyhair, closed 2 months ago, 0 comments)
#790 Finetune shell typo fix (wheresmyhair, closed 2 months ago, 0 comments)
#789 Add trust_remote_code option to finetune shells (wheresmyhair, closed 2 months ago, 0 comments)
#788 Add phi3 conversation template support (wheresmyhair, closed 2 months ago, 0 comments)
#787 Fixes & updates on lora, qlora scripts and hf_decoder_model (wheresmyhair, closed 2 months ago, 0 comments)
#786 Causal LM finetuning (harry7171, opened 2 months ago, 3 comments)
#785 Does it support llama3? (orderer0001, opened 2 months ago, 5 comments)
#784 Update examples for Full-param SFT and LISA (research4pan, closed 2 months ago, 0 comments)
#783 Custom conversation template improvement and document update (wheresmyhair, closed 2 months ago, 0 comments)
#782 Contrib README typo fix (wheresmyhair, closed 2 months ago, 0 comments)
#781 add chatml conversation template (wheresmyhair, closed 2 months ago, 0 comments)
#780 Add contributor support (research4pan, closed 2 months ago, 0 comments)
#779 Remove redundant statements in `setup.py` (research4pan, closed 2 months ago, 0 comments)
#778 Memory problem of Lisa finetuning (lovekdl, opened 2 months ago, 5 comments)
#777 [New Feature] Could someone share the finetuned diffusion model which is good at 256x256 resolution? (Pang-0093, opened 2 months ago, 0 comments)
#776 run llama 3 with lisa (wheresmyhair, closed 2 months ago, 0 comments)
#775 How to set learning rate decay in lisa fine-tuning (orderer0001, closed 2 months ago, 2 comments)
#774 About using multiple GPUs to do lisa fine-tuning (orderer0001, opened 2 months ago, 4 comments)
#773 add support for llama-3 template (wheresmyhair, closed 2 months ago, 0 comments)
#772 template info update (wheresmyhair, closed 2 months ago, 0 comments)