jinseokbae / neural_marionette

Official pytorch implementation of "Neural Marionette: Unsupervised Learning of Motion Skeleton and Latent Dynamics from Volumetric Video" (AAAI 2022, Oral)

Dataset Generation Process #1

Closed: ktertikas closed this issue 2 years ago

ktertikas commented 2 years ago

Hello and thank you for your very interesting work!

I wanted to ask you, could you please provide us with the dataset generation process? I am mainly interested in the DeformingThings4D splits, and I can see that you are loading numpy arrays. How are you generating these arrays? What is the folder structure?

Best, Konstantinos

jinseokbae commented 2 years ago

Hi, first of all, thank you for your interest in our research. As mentioned in the paper, our model does not require highly pre-processed input data, so the only thing you need to prepare is a stream of point clouds.

For the animal split of the DeformingThings4D dataset, we randomly sampled a number of points from the mesh surface of each frame, which is stored in the .anime file format. You can use the trimesh Python library to sample points from the mesh, and the dataset's GitHub page gives a detailed explanation of the .anime file format.
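A minimal sketch of this step could look as follows. The .anime layout follows the description on the DeformingThings4D GitHub page (int32 header with frame/vertex/triangle counts, first-frame vertices, triangle indices, then per-frame vertex offsets); the function names, point count, and output path are illustrative rather than the exact script used:

```python
import numpy as np
import trimesh


def read_anime(path):
    """Parse a DeformingThings4D .anime file (layout per the dataset's GitHub page):
    3 x int32 header (nf, nv, nt), first-frame vertices, triangle indices,
    then per-vertex offsets for frames 2..nf."""
    with open(path, "rb") as f:
        nf, nv, nt = np.fromfile(f, dtype=np.int32, count=3)
        verts = np.fromfile(f, dtype=np.float32, count=nv * 3).reshape(nv, 3)
        faces = np.fromfile(f, dtype=np.int32, count=nt * 3).reshape(nt, 3)
        offsets = np.fromfile(f, dtype=np.float32, count=(nf - 1) * nv * 3)
        offsets = offsets.reshape(nf - 1, nv, 3)
    return nf, verts, faces, offsets


def sample_sequence(anime_path, num_points=2048):
    """Sample a fixed number of surface points from every frame of an .anime sequence."""
    nf, verts, faces, offsets = read_anime(anime_path)
    frames = []
    for i in range(nf):
        # frame 0 uses the rest vertices; later frames add the stored offsets
        frame_verts = verts if i == 0 else verts + offsets[i - 1]
        mesh = trimesh.Trimesh(vertices=frame_verts, faces=faces, process=False)
        points, _ = trimesh.sample.sample_surface(mesh, num_points)
        frames.append(np.asarray(points, dtype=np.float32))
    return np.stack(frames)  # (nf, num_points, 3)


# e.g. save one sequence as a numpy array for the data loader to read later
# np.save("data/DeformingThings4D/animals/train/bull/bull_seq1.npy",
#         sample_sequence("bull_seq1.anime"))
```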

The folder structure is as follows:

data/DeformingThings4D/animals
    train
        bull
            seq1_for_bull
            ....
        canie
        ...
    test
        ...

However, the folder hierarchy is entirely the user's choice, so you can use your own structure by changing a few lines of code in dataset.py.
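For instance, adapting to a different layout mostly means changing how sequence files are enumerated. A hypothetical sketch (the class and argument names below are illustrative, not the actual code in dataset.py):

```python
import glob
import os

import numpy as np
from torch.utils.data import Dataset


class PointCloudSequenceDataset(Dataset):
    """Illustrative sketch: list every per-sequence .npy file under root/split/<category>/.
    Only the glob pattern needs to change to match a different folder layout."""

    def __init__(self, root, split="train"):
        self.files = sorted(glob.glob(os.path.join(root, split, "*", "*.npy")))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # (num_frames, num_points, 3) point-cloud stream saved during preprocessing
        return np.load(self.files[idx])
```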

Best, Jinseok

ktertikas commented 2 years ago

Thank you for your detailed answer. Looking through the code, I found the preprocessing you were doing for DFAUST (https://github.com/jinseokbae/neural_marionette/blob/main/dataset/dfaust/write_sequence_to_obj.py), so I guess a similar procedure is followed for DeformingThings4D as well (provided that the meshes have already been extracted).

Best, Konstantinos