qiqiApink / MotionGPT

The official PyTorch implementation of the paper "MotionGPT: Finetuned LLMs are General-Purpose Motion Generators"
https://qiqiapink.github.io/MotionGPT/
189 stars · 11 forks

Out of memory on M1 mac #2

Closed NickAnastasoff closed 1 year ago

NickAnastasoff commented 1 year ago

Hi, I had to hack in some code to get this far, but now when I run the demo (without the render flag) I get this error:

 (motiongpt) nickanastasoff@Nicks-Air motiongpt % python generate_motion.py --prompt "Generate a sequence of motion tokens matching the following human motion description." --input "a person walks forward." --lora_path ./checkpoints/pretrained_lora/pretrained.pth --out_dir {output_dir}
loading checkpoint from ./checkpoints/pretrained_vqvae/t2m.pth
Loading model ...
Traceback (most recent call last):
  File "/Users/nickanastasoff/Desktop/MotionGPT/generate_motion.py", line 121, in <module>
    main()
  File "/Users/nickanastasoff/Desktop/MotionGPT/generate_motion.py", line 72, in main
    model = LLaMA(config)
  File "/Users/nickanastasoff/Desktop/MotionGPT/lit_llama/model.py", line 47, in __init__
    h=nn.ModuleList([Block(config) for _ in range(config.n_layer)]),
  File "/Users/nickanastasoff/Desktop/MotionGPT/lit_llama/model.py", line 47, in <listcomp>
    h=nn.ModuleList([Block(config) for _ in range(config.n_layer)]),
  File "/Users/nickanastasoff/Desktop/MotionGPT/lit_llama/model.py", line 96, in __init__
    self.attn = CausalSelfAttention(config)
  File "/Users/nickanastasoff/Desktop/MotionGPT/lit_llama/lora.py", line 198, in __init__
    self.c_attn = MergedLinear(
  File "/Users/nickanastasoff/Desktop/MotionGPT/lit_llama/lora.py", line 52, in __init__
    nn.Linear.__init__(self, in_features, out_features, **kwargs)
  File "/Users/nickanastasoff/miniconda3/envs/motiongpt/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 96, in __init__
    self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
  File "/Users/nickanastasoff/Desktop/MotionGPT/lit_llama/utils.py", line 110, in __torch_function__
    return func(*args, **kwargs)
RuntimeError: MPS backend out of memory (MPS allocated: 8.34 GB, other allocations: 700.38 MB, max allowed: 9.07 GB). Tried to allocate 300.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
(motiongpt) nickanastasoff@Nicks-Air motiongpt % 

any ideas on where I would need to change to fix this? thanks!

qiqiApink commented 1 year ago

The inference code requires about 53 GB of GPU memory for the MotionGPT-13B model, and about half that for the MotionGPT-7B model.
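That figure is consistent with a back-of-the-envelope estimate, assuming the checkpoint is loaded in fp32 (4 bytes per parameter); the helper name below is illustrative, not part of the repo, and real usage is somewhat higher due to activations and caches:

```python
def weight_memory_gb(n_params_billion: float, bytes_per_param: int = 4) -> float:
    """Rough memory footprint of the model weights alone, in GB."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

print(weight_memory_gb(13))  # ~52 GB for MotionGPT-13B in fp32
print(weight_memory_gb(7))   # ~28 GB for MotionGPT-7B in fp32
```

Either way, it is far beyond the ~9 GB MPS allocation limit shown in the traceback above.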

NickAnastasoff commented 1 year ago

Oh wow! Thanks for the reply. I guess it will be a while before I can run this on my Mac haha

gkuberreddy commented 10 months ago

> The inference code requires about 53 GB of GPU memory for MotionGPT-13B model and about a half for MotionGPT-7B model.

Do you currently have a way in your code to distribute inference and/or training across multiple GPUs? I don't see one, though I may have missed it. Thanks in advance for your response, and kudos to your great work! :)
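For readers in the same situation: the repo does not appear to ship multi-GPU support, but a naive model-parallel split of the transformer block stack is one common workaround. The sketch below is hypothetical (the helper names are not from this repo, and it is demonstrated on CPU devices for portability); it places contiguous chunks of layers on different devices and moves activations at each chunk boundary:

```python
import torch
import torch.nn as nn

def shard_sequential(layers: nn.ModuleList, devices: list) -> list:
    """Split a stack of blocks into contiguous chunks, one chunk per device.
    Returns (layer, device) pairs so the forward pass knows where each lives."""
    chunk = (len(layers) + len(devices) - 1) // len(devices)  # ceil division
    placed = []
    for i, layer in enumerate(layers):
        dev = torch.device(devices[i // chunk])
        placed.append((layer.to(dev), dev))
    return placed

def forward_sharded(x: torch.Tensor, placed: list) -> torch.Tensor:
    """Run the input through the sharded stack, hopping devices as needed."""
    for layer, dev in placed:
        x = x.to(dev)
        x = layer(x)
    return x

# Usage sketch: with real GPUs this would be e.g. ["cuda:0", "cuda:1"].
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])
placed = shard_sequential(layers, ["cpu", "cpu"])
out = forward_sharded(torch.randn(2, 8), placed)
```

Note this only splits the weights; each GPU still needs enough memory for its own chunk, and activations cross device boundaries once per chunk.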