alibaba/ChatLearn
A flexible and efficient training framework for large-scale alignment tasks
Apache License 2.0 · 216 stars · 17 forks
Issues (sorted newest first)
#163 · release temporary memory for HEP. · charles9304 · closed 10 hours ago · 0 comments
#162 · Support multiple vllm generation replicas for VLLMModuleV2 · SeaOfOcean · closed 15 hours ago · 0 comments
#161 · [UT] Add UT for HEP + vLLM when TP size eq · haolin-nju · closed 16 hours ago · 0 comments
#160 · Fix dictionary changed size during iteration in parameter sync · SeaOfOcean · closed 1 day ago · 0 comments
#159 · [Feature] In the DPO setting, why does online DPO perform worse than DPO, and why does RLHF regress? · yiyepiaoling0715 · closed 1 day ago · 1 comment
#158 · [Feature] Why do models under the MoE framework need separate support? · yiyepiaoling0715 · closed 1 day ago · 1 comment
#157 · support unbalanced pp for vllm. · charles9304 · closed 5 days ago · 0 comments
#156 · Add VLLMModuleV2 and call vllm high-level API · SeaOfOcean · closed 6 days ago · 0 comments
#155 · [BUG] Cannot use APEX and TransformerEngine when using convert_hf_to_megatron · Chlience · closed 2 days ago · 2 comments
#154 · Delete unused env. · adoda · closed 1 week ago · 0 comments
#153 · support param sync validation for QWen-MoE. · charles9304 · closed 1 week ago · 0 comments
#152 · [feature] Support HEP and vLLM when EP size neq · haolin-nju · closed 1 day ago · 0 comments
#151 · fix attr error and add ut of balanced tp. · charles9304 · closed 1 week ago · 0 comments
#150 · [fix] Add stats on the lm_loss and dpo_loss for online_dpo · haolin-nju · closed 1 week ago · 0 comments
#149 · bump version to 1.1.0 · SeaOfOcean · closed 6 days ago · 0 comments
#148 · [BUG] import error for importlib.util and unable to use Megatron-LM · Chlience · closed 1 week ago · 3 comments
#147 · compatible with ray >= 2.38.0; not supported with ray < 2.11.0. · adoda · closed 2 weeks ago · 0 comments
#146 · support qwen2-moe. · charles9304 · closed 2 weeks ago · 2 comments
#145 · fix attr error and remove assert error for non-MoE qwen model. · charles9304 · closed 3 weeks ago · 1 comment
#144 · fix attribute error when syncing megatron and vllm · SeaOfOcean · closed 3 weeks ago · 0 comments
#143 · [QUESTION] why do we need a `dpo_train_step`? · loofahcus · closed 3 weeks ago · 1 comment
#142 · support pp for vllm0.6.3 · charles9304 · closed 3 weeks ago · 0 comments
#141 · upgrade to vllm0.6.3 but support tp only. · charles9304 · closed 4 weeks ago · 0 comments
#140 · EMS compatible with transformer_engine v1.10. · adoda · closed 3 weeks ago · 0 comments
#139 · [BUG] Fix NaN error for validating param sync · haolin-nju · closed 4 weeks ago · 2 comments
#138 · Support MCTS model flow · SeaOfOcean · closed 2 weeks ago · 0 comments
#137 · run unit test on current change · SeaOfOcean · closed 1 month ago · 0 comments
#136 · [feature] Add parameter sync for hyper expert parallel · haolin-nju · closed 3 weeks ago · 0 comments
#135 · format faq (for debug) · SeaOfOcean · closed 1 month ago · 0 comments
#134 · bypass docs change UT · SeaOfOcean · closed 1 month ago · 0 comments
#133 · [Feature] Separate requirements.txt for different use cases · kevin85421 · opened 1 month ago · 0 comments
#132 · Add FAQ for non-contiguous tensor error when we enable pp · haolin-nju · closed 1 month ago · 0 comments
#131 · [BUG] Assertion Error in generate_tokens_probs_and_return_on_first_stage when pp > 1 · loofahcus · closed 1 month ago · 2 comments
#130 · [chore] Print a message if `VLLMPolicyInference` is not available · kevin85421 · closed 1 month ago · 0 comments
#129 · [chore] rename CHATLARN_LOG_ACTOR to CHATLEARN_LOG_ACTOR · kevin85421 · closed 1 month ago · 0 comments
#128 · [chore] Add `accelerate` to requirements.txt · kevin85421 · closed 1 month ago · 4 comments
#127 · Add FAQ link in installation.md for Megatron ckpt conversion issue · haolin-nju · closed 1 month ago · 0 comments
#126 · add unbalanced param_sync example. · charles9304 · closed 4 weeks ago · 0 comments
#125 · upgrade to vllm 0.6.1 but support tp only. · charles9304 · closed 4 weeks ago · 0 comments
#124 · Add feature list. · adoda · closed 1 month ago · 0 comments
#123 · refactor param synchronization · SeaOfOcean · closed 2 weeks ago · 1 comment
#122 · Validate parameter sync during training · SeaOfOcean · closed 1 month ago · 0 comments
#121 · refine regression test to avoid OOM · SeaOfOcean · closed 1 month ago · 0 comments
#120 · Hotfix: update parameters that are reordered/concat in vllm parameter sync · SeaOfOcean · closed 1 month ago · 0 comments
#119 · Fix megatron-megatron sync param_name mapping when TP size differs · SeaOfOcean · closed 1 month ago · 0 comments
#118 · fix param sync with pipeline · SeaOfOcean · closed 1 month ago · 0 comments
#117 · Fix: update parameters that are reordered/concat in vllm parameter sync · SeaOfOcean · closed 1 month ago · 0 comments
#116 · Improve parameter sync speed when TP=1 · SeaOfOcean · closed 1 month ago · 0 comments
#115 · Fix: avoid creating empty FlatTensors · haolin-nju · closed 1 month ago · 0 comments
#114 · add uts for param sync when tp size not equal. · charles9304 · closed 1 month ago · 1 comment