Hi, thanks for sharing your amazing work. I have two questions about the demo you provided.
(1) I noticed that you posted "Try our 💥 [Online Demo] here, which is integrated into [ImageBind-LLM]" and followed the link. However, it appears to be the SPHINX-MLLM demo, and I don't see any way to upload point cloud data. Where can I submit point cloud data and generate the corresponding descriptions shown in the paper?
(2) Following up on the first question: in addition to the online demo, could you provide inference code? For example, something like SPHINX (https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX)? A rough sketch of the kind of interface I have in mind is below.
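To make the request concrete, here is a minimal sketch of what I'm hoping for. Every name in it (the `point_llm_inference` module, `PointLLM` class, `from_pretrained`, `generate`, the checkpoint path, and the input format) is a placeholder I made up to illustrate the idea; I don't know the actual API.

```python
# Hypothetical sketch -- all names below are placeholders, not the real API.
import numpy as np

from point_llm_inference import PointLLM  # hypothetical module/class

# Load a pretrained model from a local checkpoint (path is a placeholder).
model = PointLLM.from_pretrained("path/to/checkpoint")

# Load a point cloud as an (N, 3) XYZ or (N, 6) XYZRGB array (assumed format).
points = np.load("example_object.npy")

# Generate a text description of the point cloud.
caption = model.generate(points, prompt="Describe this 3D object.")
print(caption)
```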
Thanks.