Open exnx opened 3 months ago

I am trying to use the universal checkpoint conversion code, `python ds_to_universal.py`, but I get an error saying it can't find a layer number. I'm not sure why, but I am missing layers 01 and 16; my code just skips creating them when saving the checkpoint. The DeepSpeed checkpoint conversion expects them and therefore breaks. Does that sound familiar to anyone? Thanks in advance! I am using the GPT-NeoX codebase and have DeepSpeed 0.14.4 installed.

Error:

Here are the files in my save directory:
@exnx What is your DeepSpeed configuration, and can you share the stack trace of the error?
I found the issue: the conversion looks for `layer_01`, but that layer has no weights in my model, so it is never saved. I had to hack the DeepSpeed library and change the assertions that check for `layer_01` so they look for `layer_02` instead. DeepSpeed looks for `layer_01` so that it can figure out the model-parallel size from that layer.
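For context, the hack amounted to something like the following. This is a hypothetical sketch of the kind of assertion that was patched, not the actual DeepSpeed source; the function name is made up for illustration.

```python
import os

def patched_first_layer_check(checkpoint_dir):
    # The stock check (sketched here) required a layer_01 file to exist.
    # That fails for this model because layer_01 has no weights and is
    # never written, so the hack points the check at layer_02 instead.
    # Original (fails for this model):
    #   assert any(f.startswith("layer_01") for f in os.listdir(checkpoint_dir))
    assert any(f.startswith("layer_02") for f in os.listdir(checkpoint_dir)), \
        f"expected a layer_02 checkpoint file in {checkpoint_dir}"
```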
@exnx, thanks for debugging this issue. Your analysis is correct. The purpose of that assertion is to confirm the existence of at least one `layer_*` file when using pipeline parallelism. There is nothing special about `layer_01`; it was just a convenient choice for the model used during development. For example, a more robust (but inefficient) validation would be to check that `_get_layer_keys()` is not empty.
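A minimal sketch of that more robust check might look like the following. It assumes a `_get_layer_keys()`-style helper that globs the checkpoint directory for `layer_*` files, and that layer files are named along the lines of `layer_01-model_00-model_states.pt`; the helper body, the filename pattern, and `validate_pipeline_checkpoint` are illustrative assumptions, not the exact DeepSpeed internals.

```python
import glob
import os

def _get_layer_keys(checkpoint_dir):
    """Collect the distinct layer prefixes (e.g. 'layer_02') present on disk."""
    files = glob.glob(os.path.join(checkpoint_dir, "layer_*"))
    return sorted({os.path.basename(f).split("-")[0] for f in files})

def validate_pipeline_checkpoint(checkpoint_dir):
    # Instead of asserting that layer_01 specifically exists (it may have no
    # weights and never be saved), only require that *some* layer_* file is
    # present, i.e. that _get_layer_keys() is not empty.
    layer_keys = _get_layer_keys(checkpoint_dir)
    assert len(layer_keys) > 0, \
        f"no layer_* checkpoint files found in {checkpoint_dir}"
    return layer_keys
```

The trade-off noted above is that globbing the whole directory is less efficient than probing one known filename, but it does not break when a particular layer happens to carry no weights.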