OptimalScale / LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
https://optimalscale.github.io/LMFlow/
Apache License 2.0 · 8.11k stars · 817 forks
Issues (newest first)
#869  The tool-learning domain finetune script developed based on ToolBench  (HALIS-sh, closed 2 hours ago, 0 comments)
#868  RAFT Issues  (jujipotle, opened 1 day ago, 2 comments)
#867  [Feature] reward model inferencer and dpov2 aligner  (wheresmyhair, opened 1 day ago, 1 comment)
#866  [Feature] Reward model inferencer support  (wheresmyhair, closed 1 day ago, 0 comments)
#865  Add customized optimizer support  (research4pan, closed 5 days ago, 1 comment)
#864  Full parameter fine-tuning bugs  (tankeui, opened 6 days ago, 3 comments)
#863  [Feature] Add vllm inference example  (wheresmyhair, closed 6 days ago, 1 comment)
#862  [Roadmap] LMFlow Roadmap  (wheresmyhair, opened 6 days ago, 1 comment)
#861  [BUG] The text cannot be generated successfully during the Raft step  (biaoliu-kiritsugu, opened 1 week ago, 1 comment)
#860  [Feature] vllm inferencer and memory safe vllm inferencer  (wheresmyhair, closed 1 week ago, 2 comments)
#859  [Feature] Iterative DPO support  (wheresmyhair, closed 1 week ago, 0 comments)
#858  demo of DPO with QLoRA (w Llama3 70B Instruct)  (anchen1011, opened 1 week ago, 1 comment)
#857  Usability update  (wheresmyhair, closed 1 week ago, 0 comments)
#856  json load dataset takes a long time  (biaoliu-kiritsugu, closed 1 week ago, 1 comment)
#855  Multi-GPU full-parameter training error  (tankeui, opened 2 weeks ago, 2 comments)
#854  [Feature] Add PPO support  (wheresmyhair, closed 1 week ago, 0 comments)
#853  Add multi node README  (research4pan, closed 2 weeks ago, 0 comments)
#852  [Model] hf model modification and inheritance change  (wheresmyhair, closed 2 weeks ago, 0 comments)
#851  [Feature] PPO Support  (wheresmyhair, closed 2 weeks ago, 0 comments)
#850  [Usability] Add preset lora target modules  (wheresmyhair, closed 2 weeks ago, 0 comments)
#849  [Model Support] Qwen2 update  (wheresmyhair, closed 2 weeks ago, 0 comments)
#848  Add langchain chatbot  (YanxinLu, closed 2 weeks ago, 0 comments)
#847  [Bug fix] Fix tokenizer multiprocessing in reward model  (wheresmyhair, closed 3 weeks ago, 0 comments)
#846  [Bug fix] Blocking function args missing fix  (wheresmyhair, closed 3 weeks ago, 0 comments)
#845  [Bug fix] Tokenization multiprocessing fix  (wheresmyhair, closed 3 weeks ago, 0 comments)
#844  Long context summarize demo  (HALIS-sh, closed 3 weeks ago, 1 comment)
#843  Readme update  (wheresmyhair, closed 1 month ago, 0 comments)
#842  Full parameter fine-tuning cannot be trained  (orderer0001, opened 1 month ago, 1 comment)
#841  Training succeeds on a single 4090 GPU but errors out on 3x4090 GPUs; why?  (orderer0001, opened 1 month ago, 1 comment)
#840  [BUG] Mapping the dataset with num_proc = 2 or 4 raises errors  (nicosouth, opened 1 month ago, 8 comments)
#839  Add supported models table  (wheresmyhair, closed 1 month ago, 0 comments)
#838  Add paired conversation dataset description  (wheresmyhair, closed 1 month ago, 0 comments)
#837  Add github stale bot  (wheresmyhair, closed 1 month ago, 0 comments)
#836  Reward modeling support  (wheresmyhair, closed 1 month ago, 1 comment)
#835  Doc dataset page update  (wheresmyhair, closed 1 month ago, 0 comments)
#834  Reward modeling support  (wheresmyhair, closed 1 month ago, 0 comments)
#833  Discussion about LISA  (caoshuai03, opened 1 month ago, 1 comment)
#832  Add finetuning doc  (wheresmyhair, closed 1 month ago, 0 comments)
#831  Weird Loss Curve  (Zihang-Xu-2002, opened 1 month ago, 1 comment)
#830  Guide update  (wheresmyhair, closed 1 month ago, 0 comments)
#829  Customized conversation template guide update  (wheresmyhair, closed 1 month ago, 0 comments)
#828  Add zephyr template  (wheresmyhair, closed 1 month ago, 0 comments)
#827  Reward modeling support  (wheresmyhair, closed 1 month ago, 0 comments)
#826  Yizhenjia template update  (wheresmyhair, closed 1 month ago, 0 comments)
#825  [BUG] Can Lisa be used for chatglm3 with lmflow?  (BiJings, opened 1 month ago, 1 comment)
#824  Support yi and yi1.5 template  (wheresmyhair, closed 1 month ago, 0 comments)
#823  Add 'validation_split_percentage' and 'evaluation_strategy' parameters for Trainers  (smirn0v, opened 1 month ago, 1 comment)
#822  Add chatglm3 template  (wheresmyhair, closed 1 month ago, 0 comments)
#821  Why does fine-tuned Qwen1.5-1.8B-Base produce repetitive answers when tested on CMMLU?  (13416157913, closed 1 month ago, 6 comments)
#820  Add chatglm3 template  (wheresmyhair, closed 1 month ago, 0 comments)