hustvl / 4DGaussians

[CVPR 2024] 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
https://guanjunwu.github.io/4dgs/
Apache License 2.0
2.25k stars · 187 forks

Help for dynamic multiview custom dataset #124

Open chanwo0kim opened 7 months ago

chanwo0kim commented 7 months ago

Hello!

I am very impressed with this research. So I want to run this code on my own dynamic, multiview dataset.

However, I am having a hard time understanding how to set up the dataset.

I ran bash multipleviewprogress.sh (your dataset name), which produced the following folder:

(screenshot of the generated folder structure)
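For reference, my understanding is that multipleviewprogress.sh should leave the dataset looking roughly like this (a sketch based on the repo's README; the exact file names may differ):

```
data/multipleview/(your dataset name)/
├── cam01/
│   ├── frame_00001.jpg
│   ├── frame_00002.jpg
│   └── ...
├── cam02/
│   └── ...
├── sparse_/                      # COLMAP output on the first frames
├── points3D_multipleview.ply     # initial point cloud
└── poses_bounds_multipleview.npy
```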

What should I do after that? When I run python train.py ~~ the error "Could not recognize scene type" is raised.

I think the problem is probably that there is no "sparse" folder or "train_meta.json".

So I ran ns-process-data images --data data/your-data --output-dir data/your-ns-data on the first frame of each view. However, although COLMAP worked well in the COLMAP GUI and in multipleviewprogress.sh, only two camera poses were matched by ns-process-data.

I will wait for your reply. Thank you!

guanjunwu commented 7 months ago

Hi, is there a file named points3D_multipleview.ply in your folder? If it exists, the dataset format will be recognized correctly.
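For anyone hitting the same error: scene-type detection in this kind of codebase usually just dispatches on marker files in the dataset folder. A minimal sketch of that logic, assuming checks similar to the ones in the repo's scene loader (the actual file names and ordering may differ):

```python
import os

def detect_scene_type(source_path: str) -> str:
    # Sketch only: dispatch on marker files found in the dataset folder.
    if os.path.exists(os.path.join(source_path, "sparse")):
        return "Colmap"          # static COLMAP layout
    if os.path.exists(os.path.join(source_path, "transforms_train.json")):
        return "Blender"         # synthetic / D-NeRF layout
    if os.path.exists(os.path.join(source_path, "points3D_multipleview.ply")):
        return "MultipleView"    # output of multipleviewprogress.sh
    if os.path.exists(os.path.join(source_path, "poses_bounds.npy")):
        return "DyNeRF"          # LLFF-style dynamic layout
    raise ValueError("Could not recognize scene type!")
```

So if points3D_multipleview.ply sits at the top level of the folder passed to -s, the multiview branch should be picked up.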

chanwo0kim commented 7 months ago

Hi, thanks for your reply.

Yes, I already have points3D_multipleview.ply in my folder, like this: (screenshot of folder contents)

Then, can you tell me what to do next for training?

Sorry for taking up your time.

guanjunwu commented 7 months ago

Hi, it seems like it should work. Just rename your folder and put it in data/multipleview/mydataset, then add a config file arguments/multipleview/mydataset.py and run:

python train.py -s data/multipleview/mydataset --port 6017 --expname "multipleview/mydataset" --configs arguments/multipleview/mydataset.py
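In case it helps, here is a minimal sketch of what arguments/multipleview/mydataset.py could look like, modeled on the configs shipped under arguments/. The keys and values below are illustrative assumptions; copy an existing config from the repo and adjust, rather than treating this as the official one.

```python
# arguments/multipleview/mydataset.py  (sketch, not the repo's official config)
ModelHiddenParams = dict(
    kplanes_config={
        'grid_dimensions': 2,
        'input_coordinate_dim': 4,       # x, y, z, t
        'output_coordinate_dim': 32,
        'resolution': [64, 64, 64, 25],  # spatial grid + temporal resolution
    },
    multires=[1, 2, 4, 8],
    defor_depth=1,
    net_width=64,
)

OptimizationParams = dict(
    coarse_iterations=3000,   # static (coarse) stage
    iterations=14000,         # total iterations, coarse + fine
    batch_size=2,
)
```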

chanwo0kim commented 7 months ago

Actually, even though I did exactly that, the error "Could not recognize scene type!" still occurred.

So I think an alternative would be to set up the data in the DyNeRF format and run it that way.
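For reference, my understanding of the DyNeRF (Neural 3D Video) layout is roughly per-camera videos plus an LLFF-style poses_bounds.npy, which the preprocessing then extracts into per-camera frame folders. This is a sketch of the format as I understand it, not an authoritative spec:

```
data/dynerf/mydataset/
├── cam00.mp4
├── cam01.mp4
├── ...
└── poses_bounds.npy
```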

Thank you for your answer.

chanwo0kim commented 6 months ago

Hello! I am reopening the issue because I have one more question.

I confirmed that the coarse stage is coded to sample uids randomly (i.e., across all frames).

However, from the paper I understood that the coarse 3D-GS is optimized on the first frame during the coarse stage, and the deformation field is learned in the fine stage for the remaining timestamps.

If training follows the code, the 3D Gaussians from the coarse stage will represent the global scene structure rather than any particular frame.

Then, when motion significantly changes the global structure (e.g., the scene itself changes), I expect that it will not converge properly. Is this correct?
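To make the distinction I am asking about concrete, here is a small sketch (the names viewpoint_stack, cam.time, and first_frame_only are hypothetical, not from the repository):

```python
import random

def sample_coarse_camera(viewpoint_stack, first_frame_only: bool):
    # Hypothetical illustration of the two coarse-stage strategies discussed above.
    if first_frame_only:
        # My reading of the paper: optimize the static 3D-GS on t = 0 only.
        candidates = [cam for cam in viewpoint_stack if cam.time == 0.0]
    else:
        # My reading of the code: sample uniformly over all timestamps, so the
        # coarse Gaussians end up fitting an "average" global structure.
        candidates = viewpoint_stack
    return random.choice(candidates)
```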

MikeAiJF commented 1 month ago


Hello, have you solved this? I am hoping for your help.