By the way, there's 64 GB of swap, but it looks like it's not being used.
Also, it looks like whenever the model is split into multiple bins, everything just freezes. To me, deepspeed just doesn't want to use the swap; it takes up all the buff/cache and then does nothing.
Have you found a way to force deepspeed to use swap?
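As a side note: if the buffers DeepSpeed allocates are pinned, the OS can't page them out, which may be why the swap stays unused. The mechanism DeepSpeed itself provides for spilling parameters out of RAM is ZeRO-3 offload (ZeRO-Infinity) to CPU or NVMe via the DeepSpeed config. Below is a minimal sketch of such a config; the `nvme_path` is a placeholder, and how a custom `ds_config.json` gets wired into the web UI depends on its DeepSpeed integration, so treat this as an assumption to adapt rather than a confirmed fix.

```sh
# Hypothetical ds_config.json sketch: ZeRO stage 3 with parameter offload to
# NVMe (ZeRO-Infinity). Paths and values are placeholders; adjust to your setup.
cat > ds_config.json <<'EOF'
{
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_param": {
      "device": "nvme",
      "nvme_path": "/mnt/nvme_offload",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "train_micro_batch_size_per_gpu": 1
}
EOF
```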
Describe the bug
LLaMA won't load when using DeepSpeed. It just gets stuck during loading, taking up all the RAM before freezing the system.
Is there an existing issue for this?
Reproduction
Follow the tutorial for enabling DeepSpeed, using LLaMA 7B HF. A rough launch command is sketched below.
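For concreteness, the launch command from the DeepSpeed guide looks roughly like the following; the model folder name and the `--chat` flag are assumptions that depend on the text-generation-webui version you have installed.

```sh
# Roughly the command from the text-generation-webui DeepSpeed guide.
# Adjust --num_gpus and the model folder name (assumed llama-7b-hf here).
deepspeed --num_gpus=1 server.py --deepspeed --chat --model llama-7b-hf
```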
Screenshot
No response
Logs
System Info