mihaibujanca / dynamicfusion

Implementation of Newcombe et al. CVPR 2015 DynamicFusion paper
BSD 3-Clause "New" or "Revised" License

Some questions about Function "device::integrate(dists, volume, aff, proj)" #34

Open HaoguangHuang opened 6 years ago

HaoguangHuang commented 6 years ago

Hi, @mihaibujanca ,

After running your awesome dynamicfusion project, I found it didn't work very well on the umbrella dataset you provide. In particular, as the umbrella gradually closes, the fused visible model cannot keep up with the change in the object's shape.

So I read your code to check whether there are any bugs. Looking at the function "device::integrate" in the file tsdf_volume.cu, I have pasted some of the code below:

float3 vx = make_float3(x * volume.voxel_size.x, y * volume.voxel_size.y, 0);
float3 vc = vol2cam * vx; //transform from volume coordinate frame to camera frame
...
float sdf = Dp - __fsqrt_rn(dot(vc, vc)); //Dp - norm(v)

It is confusing why the integration process can be independent of the function 'psdf'. According to my understanding of the paper

DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time

, I think the sdf value should be computed as:

float3 vx = make_float3(x * volume.voxel_size.x, y * volume.voxel_size.y, 0);
float3 vc = vol2cam * vx; //transform from volume coordinate frame to camera frame
vc = warped(vc); //here 'warped' is pseudocode, representing the transformation from the model coordinate frame into the live-frame coordinate frame
...
float sdf = Dp - __fsqrt_rn(dot(vc, vc)); //Dp - norm(v)

I am wondering whether my idea is correct. Does the variable 'vc' need to be transformed with a per-voxel warp before computing the sdf?
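To make this concrete, here is a minimal, self-contained sketch of the computation I have in mind, warping per voxel before projecting. Everything here is illustrative rather than the repo's actual API: 'warp' is only a placeholder for the dual-quaternion blend of the voxel's nearest warp-field nodes, and the intrinsics are passed as plain floats.

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder: identity warp. In DynamicFusion this would be the blended
// SE3 motion (DQB) of the voxel's k nearest deformation nodes.
Vec3 warp(const Vec3& v) { return v; }

float psdf(const Vec3& v_canonical,
           const float R[3][3], const Vec3& t, // live camera pose (world -> camera)
           float fx, float fy, float cx, float cy,
           const float* depth, int width, int height)
{
    Vec3 vw = warp(v_canonical); // 1. per-voxel warp, canonical -> live frame
    Vec3 vc = {                  // 2. only then transform into camera coordinates
        R[0][0]*vw.x + R[0][1]*vw.y + R[0][2]*vw.z + t.x,
        R[1][0]*vw.x + R[1][1]*vw.y + R[1][2]*vw.z + t.y,
        R[2][0]*vw.x + R[2][1]*vw.y + R[2][2]*vw.z + t.z };
    if (vc.z <= 0.f) return NAN; // behind the camera
    int u = int(vc.x / vc.z * fx + cx + 0.5f); // 3. project into the depth map
    int v = int(vc.y / vc.z * fy + cy + 0.5f);
    if (u < 0 || u >= width || v < 0 || v >= height) return NAN;
    float Dp = depth[v * width + u];
    if (Dp <= 0.f) return NAN; // no depth measurement at this pixel
    // 4. same Dp - norm(vc) form as the repo's snippet; the paper instead
    //    takes the difference of camera-space z values, but either way the
    //    distance is measured against the *warped* point
    return Dp - std::sqrt(vc.x*vc.x + vc.y*vc.y + vc.z*vc.z);
}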

mihaibujanca commented 6 years ago

I've just added a notice on the README that the project is still under development. The current tsdf::integrate is a leftover from the KinectFusion implementation this project is based upon.

In terms of how the sdf should be computed, I think you'd first warp and then transform into the camera coordinate frame.
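Roughly like this toy example (illustrative names only, not the current code): the warp field acts in the canonical/world frame, so applying it after moving into camera coordinates gives a different, wrong, result.

#include <cstdio>

struct Vec3 { float x, y, z; };

// Toy warp: a pure translation standing in for the blended node motion.
Vec3 warp(Vec3 v) { return {v.x + 0.1f, v.y, v.z}; }

// Toy camera transform: a 90-degree rotation about the z axis.
Vec3 cam_from_world(Vec3 v) { return {-v.y, v.x, v.z}; }

int main() {
    Vec3 x = {1.f, 0.f, 2.f};
    Vec3 a = cam_from_world(warp(x)); // correct: warp first, then to camera
    Vec3 b = warp(cam_from_world(x)); // wrong order: warps in camera coords
    std::printf("warp-then-camera: (%.2f, %.2f, %.2f)\n", a.x, a.y, a.z);
    std::printf("camera-then-warp: (%.2f, %.2f, %.2f)\n", b.x, b.y, b.z);
    return 0;
}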

HaoguangHuang commented 6 years ago

Yep, as for how to compute psdf, your code for that module is quite clear and I think there is no doubt about it.

But as for the function "kfusion::cuda::TsdfVolume::surface_fusion", the code that confuses me is listed below:

std::vector<float> ro = psdf(warped, depth, intr);
    cuda::Dists dists;
    cuda::computeDists(depth, dists, intr); // precompute dists from the depth map
    integrate(dists, camera_pose, intr); // why is only one Affine3d transformation used in this process???

    for(size_t i = 0; i < ro.size(); i++)
    {
        if(ro[i] > -trunc_dist_)
        {
            warp_field.KNN(canonical[i]);
            float weight = weighting(*(warp_field.getDistSquared()), KNN_NEIGHBOURS);
            float coeff = std::min(ro[i], trunc_dist_);

//            tsdf_entries[i].tsdf_value = tsdf_entries[i].tsdf_value * tsdf_entries[i].tsdf_weight + coeff * weight;
//            tsdf_entries[i].tsdf_value = tsdf_entries[i].tsdf_weight + weight;
//
//            tsdf_entries[i].tsdf_weight = std::min(tsdf_entries[i].tsdf_weight + weight, W_MAX);
        }
    }

I think the variable 'ro', which is the return value of the function 'psdf', should be used in the function 'integrate' to act as the sdf value for live-frame integration.
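For reference, here is a sketch of the update I believe the commented-out lines are aiming at: the weighted running average from the paper's fusion step, fed with 'ro' rather than the rigid-only integrate(). The names mirror the snippet above, but the function itself is hypothetical:

#include <algorithm>
#include <cstddef>
#include <vector>

struct TsdfEntry { float tsdf_value; float tsdf_weight; };

void fuse_with_psdf(std::vector<TsdfEntry>& tsdf_entries,
                    const std::vector<float>& ro,     // psdf per canonical voxel
                    const std::vector<float>& weight, // per-voxel weight w(x_c)
                    float trunc_dist, float W_MAX)
{
    for (std::size_t i = 0; i < ro.size(); ++i)
    {
        if (ro[i] <= -trunc_dist) continue; // outside the truncation band: skip
        float coeff = std::min(ro[i], trunc_dist); // truncated signed distance
        TsdfEntry& e = tsdf_entries[i];
        float wsum = e.tsdf_weight + weight[i];
        if (wsum <= 0.f) continue; // nothing to fuse
        // Weighted running average: v' = (v*w + coeff*k) / (w + k)
        e.tsdf_value = (e.tsdf_value * e.tsdf_weight + coeff * weight[i]) / wsum;
        // Clamp the accumulated weight: w' = min(w + k, W_MAX)
        e.tsdf_weight = std::min(wsum, W_MAX);
    }
}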