RachelCmy / pinf_smoke

The PyTorch implementation of the paper "Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data" (SIGGRAPH 2022, https://rachelcmy.github.io/pinf_smoke/) by M. Chu, L. Liu, Q. Zheng, E. Franz, H.P. Seidel, C. Theobalt, and R. Zayer.

Meaning of parameters in info.json #1

Open · watashihame opened this issue 2 years ago

watashihame commented 2 years ago

Thank you for releasing the code of your extraordinary work.

I am trying to reconstruct smoke from my own data with your code, but I am stuck on creating the info.json for it.

For example, the parameters in data/Sphere/info.json:

    "voxel_scale": 12.0064,
    "voxel_matrix": [
        [
            0.4000000059604645,
            0.0,
            0.0,
            -2.419999837875366
        ],
        [
            0.0,
            -1.7484556025237907e-08,
            -0.4000000059604645,
            2.460000514984131
        ],
        [
            0.0,
            0.4000000059604645,
            -1.7484556025237907e-08,
            -2.7099997997283936
        ],
        [
            0.0,
            0.0,
            0.0,
            1.0
        ]
    ],
    "render_center":[
        0.0, 
        0.0, 
        0.0
    ],
    "near":2.0,
    "far":6.0,
    "phi":-30.0,
    "rot":"Z"

Could you tell me what these parameters refer to? Thank you.

RachelCmy commented 2 years ago

Hi,

The parameters in data/Sphere/info.json are set according to our synthetic data, which is generated with Blender and OpenVDB. "voxel_scale" is the size of the voxel volume in world coordinates; for example, our synthetic Sphere data uses 12.0064, as shown above.

"voxel_matrix" is the transformation matrix of the smoke data in blender, which is set by bpy.context.scene.objects['our_fluid_data'].matrix_world

"voxel_scale" and "voxel_matrix" are used to transfer coord. between voxel space and the world space. There is little explanation at: https://github.com/RachelCmy/pinf_smoke/blob/4b7cb02c2890ab42124aab9460eddbe108bc4666/run_pinf_helpers.py#L487

RachelCmy commented 2 years ago

"render_center", "near", "far", "phi","rot" are used to set the poses for spherical rendering (https://github.com/RachelCmy/pinf_smoke/blob/e8999f902f31ef33ff8bbc6f07bfde7e216f6fce/load_pinf.py#L160)

watashihame commented 2 years ago

Thank you for the answer.

But how should I set these values for a sequence of real captured data, where I don't know the voxel_scale and voxel_matrix? For example, like the ones in data/ScalarReal/info.json: https://github.com/RachelCmy/pinf_smoke/blob/4b7cb02c2890ab42124aab9460eddbe108bc4666/data/ScalarReal/info.json#L189-L219

Or should I just set them to some default values?

RachelCmy commented 2 years ago

Hi,

For real data, voxel_matrix can be set to an identity matrix. render_center should be the center of the volume (a position near the intersection of the cameras' viewing directions). voxel_scale determines the size of the visible volumetric area; maybe try something between distance_from_camera_to_render_center * tan(0.5 * camera_angle_x) and 2 * distance_from_camera_to_render_center * tan(0.5 * camera_angle_x).

The networks are only trained by sampling inside this volumetric area, so if it is too small, the networks cannot reconstruct anything outside it. If it is too large, the reconstruction will be blurry, since the sampling becomes too sparse.
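As a quick worked example of the suggested range, here is a minimal helper; the camera distance and field of view below are placeholders, not values from the repository.

```python
import numpy as np

def voxel_scale_range(dist_to_center, camera_angle_x):
    """Suggested (lower, upper) bounds for voxel_scale, per the advice above."""
    half_width = dist_to_center * np.tan(0.5 * camera_angle_x)
    return half_width, 2.0 * half_width

# Placeholder values: 4.0 world units from camera to render_center, 40-degree FOV.
lo, hi = voxel_scale_range(dist_to_center=4.0,
                           camera_angle_x=np.radians(40.0))
print(f"try voxel_scale between {lo:.2f} and {hi:.2f}")
```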