Closed. SeanChenxy closed this issue 3 weeks ago.
Hi, thanks for sharing the code. Could you give any guidance on how to run your model in CLI mode?
@SeanChenxy Thanks for your interest.
We now provide a demo that shows the inference procedure. Could you please describe what you need in more detail?
I think the demo is based on Gradio. Can I run inference without Gradio?
Definitely. We first tested it from the command line, passing the input data path and the user question and printing the model output.
So how can I do this? For example, python xxx.py --video xxx --prompt xxx
Aha, it will take a bit of effort to pull what you want out of the Gradio demo. I will update this soon.
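For reference, here is a minimal sketch of what such a CLI wrapper could look like. It is purely illustrative: the script name, the flag names, and `run_inference` are assumptions, and `run_inference` is only a placeholder for whatever model-loading and generation code the Gradio demo invokes internally.

```python
# Minimal, hypothetical CLI wrapper sketch; not the repository's actual script.
import argparse


def run_inference(video_path: str, prompt: str) -> str:
    # Placeholder: swap in the same model-loading and generation calls
    # that the Gradio demo runs when the user submits a query.
    return f"(model output for {video_path!r} given prompt {prompt!r})"


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Run MotionLLM-style inference from the command line, without Gradio."
    )
    parser.add_argument("--video", required=True, help="Path to the input video or motion file.")
    parser.add_argument("--prompt", required=True, help="User question about the input.")
    args = parser.parse_args()

    print(run_inference(args.video, args.prompt))


if __name__ == "__main__":
    main()
```

A hypothetical invocation would then look like `python cli_demo.py --video sample.mp4 --prompt "What is the person doing?"`; the actual entry point added to the repository may differ.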
Do you have any plan for this update? I am looking forward to trying it.
@SeanChenxy Thanks for the reminder. Perhaps in 3-5 days. Hugging Face is currently reaching out to us about deploying MotionLLM on their free GPU, and that milestone is our top priority right now.
@SeanChenxy This is now supported. Please refer to the latest README.