IDEA-Research / MotionLLM

[arXiv 2024] MotionLLM: Understanding Human Behaviors from Human Motions and Videos
https://lhchen.top/MotionLLM

CLI mode #2

Closed SeanChenxy closed 3 weeks ago

SeanChenxy commented 1 month ago

Hi, thanks for sharing the code. Could you give any guidance on how to run your model in CLI mode?

LinghaoChan commented 1 month ago

> Hi, thanks for sharing the code. Could you give any guidance on how to run your model in CLI mode?

@SeanChenxy Thanks for your attention.

We currently provide a demo that shows the inference procedure. Could you please describe what you need in more detail?

SeanChenxy commented 1 month ago

I think the demo is based on Gradio. Can I run inference without Gradio?

LinghaoChan commented 1 month ago

> I think the demo is based on Gradio. Can I run inference without Gradio?

Definitely. We first tested it in the command line: you pass in the data path and the user question, and it returns the model output.

SeanChenxy commented 1 month ago

> I think the demo is based on Gradio. Can I run inference without Gradio?
>
> Definitely. We first tested it in the command line: you pass in the data path and the user question, and it returns the model output.

So how can I do this? For example: `python xxx.py --video xxx --prompt xxx`
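
For context, a CLI entry point matching that invocation pattern would typically be a thin argparse wrapper around the model, roughly like the sketch below. This is only an illustration: the script name `cli_demo.py`, the `load_model` helper, the `model.answer` call, and the checkpoint path are all hypothetical placeholders, not MotionLLM's actual API.

```python
#!/usr/bin/env python
# cli_demo.py -- hypothetical sketch of a command-line inference wrapper.
# `load_model` and `.answer` are placeholders to be replaced with the
# actual model setup and inference calls from the MotionLLM demo code.
import argparse


def load_model(ckpt_dir):
    """Placeholder loader; wire in the real MotionLLM model setup here."""
    raise NotImplementedError(f"load the model from {ckpt_dir}")


def main():
    parser = argparse.ArgumentParser(
        description="Answer a question about a video from the command line.")
    parser.add_argument("--video", required=True,
                        help="path to the input video file")
    parser.add_argument("--prompt", required=True,
                        help="user question about the video")
    parser.add_argument("--ckpt", default="checkpoints/motionllm",
                        help="model checkpoint directory (placeholder path)")
    args = parser.parse_args()

    model = load_model(args.ckpt)
    # Placeholder inference call: feed the video path and the prompt,
    # then print the model's text answer to stdout.
    print(model.answer(args.video, args.prompt))


if __name__ == "__main__":
    main()
```

Invoked as, e.g., `python cli_demo.py --video clip.mp4 --prompt "What is the person doing?"`, this matches the pattern requested above.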

LinghaoChan commented 1 month ago

Aha, we actually put a bit of effort into turning that CLI workflow into the Gradio demo. I will update the repo to support this soon.

SeanChenxy commented 1 month ago

Do you have a timeline for this update? I am looking forward to trying it.

LinghaoChan commented 1 month ago

> Do you have a timeline for this update? I am looking forward to trying it.

@SeanChenxy Thanks for the reminder. Perhaps in 3-5 days. Hugging Face has reached out to us about deploying MotionLLM on their free GPU tier, and that milestone is currently our top priority.

LinghaoChan commented 3 weeks ago

@SeanChenxy This is now supported. Please refer to the latest README.