Hello @stas00, @tjruwase, @muellerzr and @BenjaminBossan;
Would be interested in knowing your thoughts.
@pacman100, thanks for asking. DeepSpeed has provided support for multiple models since our DeepSpeed-Chat release in April 2023.
DeepSpeed-Chat implementation is available here.
Here is a good entry point for the support named DeepSpeedRLHFEngine.
We would be excited to collaborate on integrating this into Accelerate.
This is exciting, thank you for finding time to work on this important need, Sourab!
1) I think this one is trivial - stash the engine into the model once you've created it.
# inside: model_1, optimizer_1, scheduler_1 = accelerator.prepare(model_1, optimizer_1, scheduler_1)
deepspeed_engine = ...
model_1.deepspeed_engine = deepspeed_engine
Now each engine is tied to its model, and you can operate on it from each model.
If you make this same change for the previously existing functionality, it'd still work for the single-engine case.
I suppose the only concern here is a circular reference, which might need manual untangling when the accelerator is destroyed. This is of no concern for normal functionality, since that usually implies the end of the program, but it could impact tests - we don't want memory leaks.
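For instance, here is a minimal sketch of that untangling, continuing the snippet above (this is illustrative, not Accelerate internals):

```python
import gc

# The engine references the model (DeepSpeedEngine exposes it as `engine.module`)
# and the model now references the engine (`model_1.deepspeed_engine`), so the two
# objects form a reference cycle. That's fine for normal runs, but in tests we can
# break it explicitly so memory is reclaimed promptly:
model_1.deepspeed_engine = None   # drop the model -> engine edge
del deepspeed_engine              # drop our own handle to the engine
gc.collect()                      # let the garbage collector reclaim the cycle
```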
2) When the code in `from_pretrained` that decides whether to activate `zero.Init` or not was written, it was a different world. I hadn't imagined that there might be more than one deepspeed engine. OK, so perhaps one approach is to completely redesign how transformers interacts with external engines. But let's first study the updated needs. Besides DeepSpeed ZeRO, do you know if FSDP plans on implementing `zero.Init`? Are there any other frameworks that need to tap into the model-instantiation moment in `from_pretrained`? And if none at the moment, should we prepare for the future when others will?
Hope to see any update on it!
@stas00 any update on this?
Hope to see any update on it!
Is there any update on this?
Any updates?
actively working on this now!
We have a path forward for doing this. Here's the basic plan for the early, experimental API that will ship as part of Accelerate 1.0.0.
The idea here is: if you intend on having the same DeepSpeed configuration operating across all models, and all models need to step at the same time, then you can just create one accelerator as before with `.prepare()`. As a result, when calling `accelerator.backward()` it will call the backward for every model that was prepared.
_ = accelerator.prepare(...)   # prepare every model/optimizer with the single plugin
accelerator.backward(loss)     # runs backward for every prepared DeepSpeed engine
However, if you need interoperability, then as part of the `DeepSpeedPlugin` you can give names to each model that we will tag by reference, such that:
plugin = DeepSpeedPlugin(model_to_reference={"teacher": model1, "student": model2})
With this, during the call to `accelerator.backward` you can specify which model's backward should be used:
accelerator.backward(loss, ds_model_ref_name="teacher")
Given there can be scenarios where this is not intended, such as when using a reference model for DPO, we intend to have users create a second DeepSpeed plugin that can then be enabled or disabled (with the first plugin passed in the list being the enabled one by default).
E.g.:
accelerator = Accelerator(deepspeed_plugins=[plugin1, plugin2])
From here, you can do:
plugin1.enable()
This will set up any environment variables needed (such as triggering or un-triggering ZeRO-3 init if this configuration doesn't use it), and disabling a plugin that is not the first plugin will automatically re-enable the first plugin.
(This is also used for `accelerator.prepare`.)
This will also be aliased as:
with plugin1:
...
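To tie this together, here is a rough sketch of how the proposed API could be used end to end. It follows the proposal as described above rather than a finalized implementation; the config paths, `student`/`teacher` models, `batch`, and `distillation_loss` are placeholders:

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# Two configurations, e.g. ZeRO-3 for the trained model and ZeRO-2 for a
# frozen teacher/reference model (config file paths are illustrative).
plugin1 = DeepSpeedPlugin(hf_ds_config="ds_zero3_config.json")
plugin2 = DeepSpeedPlugin(hf_ds_config="ds_zero2_config.json")

# The first plugin in the list is the enabled one by default.
accelerator = Accelerator(deepspeed_plugins=[plugin1, plugin2])

# Prepared under the default (enabled) plugin1.
student, optimizer = accelerator.prepare(student, optimizer)

# Temporarily enable plugin2 for the teacher; on exit, plugin1 is re-enabled.
with plugin2:
    teacher = accelerator.prepare(teacher)

loss = distillation_loss(student(batch), teacher(batch))
accelerator.backward(loss)
optimizer.step()
```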
If you think there are aspects of multiple-model DeepSpeed support we are missing, or something about the API is confusing, do not hesitate to give us feedback here. It's a very early API that took us quite a while to settle on, and we're more than open to changes if it doesn't fulfill certain needs of users.
Reproduction
Abstract
We are considering supporting multiple models with DeepSpeed when using Accelerate. We will be using the terms model and DeepSpeed engine interchangeably.
Motivation and Background
Currently, when using Accelerate's DeepSpeed integration, only a single model is supported. This limits use cases such as RLHF, GANs, knowledge distillation, etc., which involve multiple models. We also have interest in this feature as per the feature requests below:
The reasons for restricting support to a single model are given below:
Proposal
The aim would be to solve the 2 challenges above. This would need support for multiple models in the `prepare` method. For example, given 4 models in an RLHF scenario, I should be able to do the below:
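The original snippet is not reproduced here; as a hypothetical illustration (all model names are placeholders), preparing four RLHF models might look like:

```python
# Each trainable model is prepared with its own optimizer/scheduler; frozen
# models (reward and reference) are prepared on their own.
actor, actor_opt, actor_sched = accelerator.prepare(actor, actor_opt, actor_sched)
critic, critic_opt, critic_sched = accelerator.prepare(critic, critic_opt, critic_sched)
reward_model = accelerator.prepare(reward_model)        # frozen, inference only
reference_model = accelerator.prepare(reference_model)  # frozen, inference only
```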
Challenges for which the user would need to do extra work:
1. `accelerator.backward(model_1_loss)` or `accelerator.backward(model_2_loss)` is called. Behind the scenes, `self.deepspeed_engine_wrapped.backward(loss, **kwargs)` is currently called, as we support only one DeepSpeed engine. Now, if we have multiple DeepSpeed engines, how do we know which DeepSpeed engine's backward to call? Should a kwarg such as `accelerator.backward(model_1_loss, model=model_1)` be passed, with an internal mapping between each model and its respective DeepSpeed engine (see the sketch after this list)? However, passing such a kwarg deviates from the minimal API of Accelerate.
2. With `zero_init=True`, the user is then tasked with disabling it when loading the models which use ZeRO-2 via the `with zero3_init_context_manager(enabled=False)` context manager.
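As a rough sketch of the kwarg idea from challenge 1 (illustrative names only, not Accelerate's actual internals):

```python
class MultiEngineBackwardSketch:
    """Keeps a mapping from each prepared model to its DeepSpeed engine."""

    def __init__(self):
        self._model_to_engine = {}  # filled during prepare(): {model: engine}

    def backward(self, loss, model=None, **kwargs):
        if model is not None:
            # Explicit dispatch: run backward on the engine tied to this model.
            self._model_to_engine[model].backward(loss, **kwargs)
        elif len(self._model_to_engine) == 1:
            # Single-engine case: behave exactly as today.
            next(iter(self._model_to_engine.values())).backward(loss, **kwargs)
        else:
            raise ValueError(
                "Multiple DeepSpeed engines are prepared; pass `model=` to backward()."
            )
```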
Compatibility
This feature needs to be backwards compatible with Accelerate as well as Trainer. The Trainer API will have no changes.
Alternatives Considered
`accelerator.prepare()` method.
Dependencies
Expected behavior
Enabling use cases involving multiple models with Accelerate's DeepSpeed integration.