Open josharian opened 4 months ago
Zero3 should handle frozen modules, per https://github.com/microsoft/DeepSpeed/pull/2653/files. Are we perhaps freezing/unfreezing too late, after deepspeed has wrapped the model?
> Zero3 should handle frozen modules.

I think the trouble is that range freezing relies on having shape information available, and once deepspeed has wrapped the model, that shape information is unavailable.

> Are we perhaps freezing/unfreezing too late after deepspeed has wrapped the model?

That sounds plausible. (Might that also mean that deepspeed isn't as effective as it could be at memory usage?)
@winglian Got the same problem with a ZeRO stage 1 config. Unfreezing an entire layer doesn't work (no gradient).
No problem with FSDP.
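For reference, the kind of config being discussed is axolotl's `unfrozen_parameters` list. The patterns below are illustrative assumptions on my part, not the reporter's actual config; the slice form is the "range freezing" that needs shape information:

```yaml
# Illustrative only: unfreeze one whole layer by prefix, and a row range
# of a tensor by slice (if I'm reading the range syntax right).
unfrozen_parameters:
  - model.layers.30.                    # whole layer (the case reported to fail)
  - model.embed_tokens.weight[:32000]   # range; the end is resolved from the param's shape
```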
Please check that this issue hasn't been reported before.
Expected Behavior
Set up a config like:
Train.
Expect something like:
Got:
This leads to things...not working as intended.
https://github.com/OpenAccess-AI-Collective/axolotl/pull/1686 will make diagnosis/recognition of this easier. But it doesn't fix the root problem.
AFAICT, the root problem is that deepspeed/zero3.json changes model loading such that the parameters no longer have their original shapes, like this:
As a result, when a range end is `None`, it gets resolved to 0 instead of the parameter's full size.
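A minimal sketch of the failure mode (hypothetical helper, not axolotl's actual code): if a range end of `None` defaults to the parameter's leading dimension, then a ZeRO-3-partitioned parameter reporting `torch.Size([0])` collapses the range to zero.

```python
def resolve_range(start, end, param_shape):
    """Resolve a freeze range [start, end) against a parameter's shape.

    Hypothetical helper: when end is None, fall back to the leading dim.
    """
    size = param_shape[0] if param_shape else 0
    return (start, size if end is None else end)

# Before deepspeed wraps the model, the shape is real:
assert resolve_range(0, None, (32000, 4096)) == (0, 32000)

# After ZeRO-3 partitioning, the parameter reports torch.Size([0]),
# so "end = None" silently becomes end = 0 and nothing is unfrozen:
assert resolve_range(0, None, (0,)) == (0, 0)
```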
(It also appears that this may mess with model saving as well. My saved models with deepspeed/zero3.json are way too small, possibly because they have shape `torch.Size([0])` for almost all layers.)

Current behaviour

see above
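On the saving symptom: as far as I know, DeepSpeed can gather the full weights at save time under ZeRO-3 via the config knob below (zero3.json fragment only, other fields omitted):

```json
{
  "zero_optimization": {
    "stage": 3,
    "stage3_gather_16bit_weights_on_model_save": true
  }
}
```

Without that flag, the saved state can contain the partitioned (size-0) placeholders rather than consolidated tensors.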
Steps to reproduce
see above
Config yaml
No response
Possible solution
No response
Which Operating Systems are you using?
Python Version
3.11
axolotl branch-commit
whatever the docker image has (how do I get this from the docker image?)
Acknowledgements