PRBonn / vdbfusion

C++/Python Sparse Volumetric TSDF Fusion
MIT License

Realtime Point Cloud extraction #22

Closed paucarre closed 1 year ago

paucarre commented 2 years ago

Introduction

For real-time applications it is very useful to visualize the volume, for instance for debugging. Currently this can be done by running marching cubes, which generates a mesh. The problem is that generating meshes is very computationally expensive, unless recent commits to OpenVDB make it fast and high-quality enough. This means the volume currently cannot be displayed in real time.

Besides, running ICP to correct TFs/frames, which has to happen in real time, would require point clouds in real time, which, as said, is not currently possible. Note that ICP could potentially help correct errors that occur during integration (for instance "Lady and Cow"). Also note that this approach is used in Voxblox.

Note that Open3D, in its tensor SLAM pipeline, does implement this and does work in real time ( example ).

Proposed Work

1 - Test whether recent changes in OpenVDB make mesh generation good enough in terms of quality and speed (see the sketch below). Also analyze the different MeshToVolumeFlags.
2 - In case point (1) does not hold true, implement, either in OpenVDB or in vdbfusion, a way to quickly extract point clouds (optionally with normals) in real time.
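For illustration, here is a rough, untested sketch of how point (1) could be timed with OpenVDB's built-in openvdb::tools::volumeToMesh, assuming the fused TSDF is available as an openvdb::FloatGrid (the function name and timing scaffolding are just placeholders, not part of vdbfusion):

#include <openvdb/openvdb.h>
#include <openvdb/tools/VolumeToMesh.h>

#include <chrono>
#include <iostream>
#include <vector>

// Time OpenVDB's built-in surface extraction on a fused TSDF grid.
void BenchmarkVolumeToMesh(const openvdb::FloatGrid& tsdf) {
    std::vector<openvdb::Vec3s> points;
    std::vector<openvdb::Vec3I> triangles;
    std::vector<openvdb::Vec4I> quads;

    const auto t0 = std::chrono::steady_clock::now();
    // isovalue 0.0 extracts the zero crossing; adaptivity in [0, 1] trades
    // triangle count against surface fidelity.
    openvdb::tools::volumeToMesh(tsdf, points, triangles, quads,
                                 /*isovalue=*/0.0, /*adaptivity=*/0.0);
    const auto t1 = std::chrono::steady_clock::now();

    std::cout << "volumeToMesh: " << points.size() << " vertices, "
              << triangles.size() << " triangles, " << quads.size() << " quads in "
              << std::chrono::duration<double, std::milli>(t1 - t0).count()
              << " ms\n";
}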

Questions

I'd like to know:
1- Generally, what's your opinion/POV regarding this line of work in terms of utility, feasibility, etc.?
2- Whether that's something you have planned to do or will do.
3- Whether you need extra help with that.

mpottinger commented 1 year ago

I am just getting started with vdbfusion/OpenVDB. Have you made any attempts at this? What is the current speed/FPS at which you can extract the points? The vdbfusion demo on the KITTI dataset does fusion at around 17 FPS on my MacBook M1 Pro with the current single-threaded implementation. I wouldn't expect point cloud extraction to be much slower than fusion, what do you think? If you implement multithreading with OpenVDB via TBB, as in the OpenVDB examples, to read all set values, it may be fast enough. I will of course need to try. In the past I've had to use the GPU to extract point clouds at any real speed, and OpenVDB doesn't support the GPU except via NanoVDB.
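For reference, a rough, untested sketch of that kind of multi-threaded read, using OpenVDB's LeafManager together with tbb::parallel_reduce to visit all active voxels in parallel (the function name and threshold are placeholders, not vdbfusion API):

#include <openvdb/openvdb.h>
#include <openvdb/tree/LeafManager.h>
#include <tbb/parallel_reduce.h>

#include <Eigen/Core>
#include <cmath>
#include <vector>

// Collect the world-space centers of all active voxels with |tsdf| < thresh,
// processing leaf nodes in parallel with TBB.
std::vector<Eigen::Vector3d> ExtractPointsParallel(const openvdb::FloatGrid& tsdf,
                                                   float thresh) {
    using LeafManagerT = openvdb::tree::LeafManager<const openvdb::FloatTree>;
    LeafManagerT leafs(tsdf.tree());
    const auto& xform = tsdf.transform();

    return tbb::parallel_reduce(
        leafs.leafRange(), std::vector<Eigen::Vector3d>{},
        [&](const LeafManagerT::LeafRange& range, std::vector<Eigen::Vector3d> points) {
            // Each task walks a subset of leaf nodes and appends its own points.
            for (auto leaf = range.begin(); leaf; ++leaf) {
                for (auto iter = leaf->cbeginValueOn(); iter; ++iter) {
                    if (std::abs(iter.getValue()) < thresh) {
                        const auto p = xform.indexToWorld(iter.getCoord());
                        points.emplace_back(p.x(), p.y(), p.z());
                    }
                }
            }
            return points;
        },
        [](std::vector<Eigen::Vector3d> a, const std::vector<Eigen::Vector3d>& b) {
            // Merge the per-task results.
            a.insert(a.end(), b.begin(), b.end());
            return a;
        });
}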

nachovizzo commented 1 year ago

Hello, sorry for the long silence.

This is interesting indeed, but as @mpottinger mentioned, we should use TBB or any other multithread backend to speed up stuff, since this will only be used for visualization purposes.

@paucarre:

1- "Generally, what's your opinion/POV regarding this line of work in terms of utility, feasibility, etc.": I think it would be a nice feature to have, although as you can already see, I don't have any time left to supervise/support this matter.
2- "Whether that's something you have planned to do or will do": At some point I wanted to, yes.
3- "Whether you need extra help with that": If you are still interested :) go for it

On the other hand, the mesh extraction method from OpenVDB is quite different from marching cubes: it exploits the fact that the grids are "manifolds", meaning that they have closed surfaces, which is not our case. If you want to see how it looks, just open the OpenVDB visualizer and you will see a big difference from our system.

@mpottinger

If you feel like implementing it, go for it, although I'm currently overloaded with other stuff and I still need to review @paucarre's PR adding color support. So if you don't mind this slow schedule for your contribution, just go for it.

mpottinger commented 1 year ago

I have implemented this myself and it was really straightforward: I just added a function to iterate over all set values within a certain tsdf range. Using the Python wrapper and Open3D I am able to scan a room and view the scan in real time from my iPhone. It works very well, with no speed issues even single-threaded. OpenVDB really seems like a godsend/holy grail for 3D scanning.

nachovizzo commented 1 year ago

@mpottinger I'm really happy to read this; one of my main goals with vdbfusion was to popularize the use of VDBs in the robotics/CV domains.

Do you plan to release the source code? Do you have any fancy screenshot?

mpottinger commented 1 year ago

[screenshot of the live point cloud preview]

mpottinger commented 1 year ago

I'm not sure what I'm going to do with my mobile scanning code, but OpenVDB made the point cloud extraction/visualization very nice to implement. I have never been able to do it this easily and efficiently before, and I have tried lots of GPU shaders, etc. This is better on CPU only. It's like a godsend, so much more efficient.

Extracting the entire point cloud at low resolution for every frame worked for me, but at higher resolutions the OpenVDB structure was very useful: I just keep track of which voxels have recently had their tsdf value updated and only stream those points out to the Open3D viewer incrementally. I was never able to achieve this satisfactorily with anything else; with this, it was easy.

I added a few functions and updated the Python bindings to call them:

namespace vdbfusion {

VDBVolume::VDBVolume(float voxel_size, float sdf_trunc, bool space_carving /* = false*/)
    : voxel_size_(voxel_size), sdf_trunc_(sdf_trunc), space_carving_(space_carving) {
tsdf_ = openvdb::FloatGrid::create(sdf_trunc_);
tsdf_->setName("D(x): signed distance grid");
tsdf_->setTransform(openvdb::math::Transform::createLinearTransform(voxel_size_));
tsdf_->setGridClass(openvdb::GRID_LEVEL_SET);

weights_ = openvdb::FloatGrid::create(0.0f);
weights_->setName("W(x): weights grid");
weights_->setTransform(openvdb::math::Transform::createLinearTransform(voxel_size_));
weights_->setGridClass(openvdb::GRID_UNKNOWN);

colors_ = openvdb::Vec3IGrid::create(openvdb::Vec3I(0.0f, 0.0f, 0.0f));
colors_->setName("C(x): colors grid");
colors_->setTransform(openvdb::math::Transform::createLinearTransform(voxel_size_));
colors_->setGridClass(openvdb::GRID_UNKNOWN);

// was_updated_ is a grid that stores whether the voxel has been updated since the last live point cloud extraction (for accelerating live point cloud visualization)
was_updated_ = openvdb::BoolGrid::create(false);
// was_viewed_ is a grid that stores the tsdf value a voxel had when it was last streamed to the live preview (for accelerating live point cloud visualization)
was_viewed_ = openvdb::FloatGrid::create(sdf_trunc_);

}

// GetNormal - gets the normal of a voxel in the TSDF volume
Eigen::Vector3d VDBVolume::GetNormal(const openvdb::Coord &voxel) const {
const auto &xform = tsdf_->transform();
const auto voxel_size = xform.voxelSize();
const auto voxel_size_x = voxel_size.x();
const auto voxel_size_y = voxel_size.y();
const auto voxel_size_z = voxel_size.z();
const auto tsdf_acc = tsdf_->getConstAccessor();
const auto x = voxel.x();
const auto y = voxel.y();
const auto z = voxel.z();

const auto tsdf_xp = tsdf_acc.getValue(openvdb::Coord(x + 1, y, z));
const auto tsdf_xn = tsdf_acc.getValue(openvdb::Coord(x - 1, y, z));
const auto tsdf_yp = tsdf_acc.getValue(openvdb::Coord(x, y + 1, z));
const auto tsdf_yn = tsdf_acc.getValue(openvdb::Coord(x, y - 1, z));
const auto tsdf_zp = tsdf_acc.getValue(openvdb::Coord(x, y, z + 1));
const auto tsdf_zn = tsdf_acc.getValue(openvdb::Coord(x, y, z - 1));

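// central differences of D(x) along each axis approximate the TSDF gradient, whose normalized direction is the surface normal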
const auto dx = (tsdf_xp - tsdf_xn) / (2 * voxel_size_x);
const auto dy = (tsdf_yp - tsdf_yn) / (2 * voxel_size_y);
const auto dz = (tsdf_zp - tsdf_zn) / (2 * voxel_size_z);

const auto normal = Eigen::Vector3d(dx, dy, dz);
return normal.normalized();

}

// extract a colored point cloud from the TSDF volume (extracts ALL points; a faster but lower-quality live preview version is below)
std::tuple<std::vector<Eigen::Vector3d>, std::vector<Eigen::Vector3d>, std::vector<Eigen::Vector3d>>
VDBVolume::ExtractPointCloud(float thresh, float min_weight) const {
std::vector<Eigen::Vector3d> points;
std::vector<Eigen::Vector3d> colors;
std::vector<Eigen::Vector3d> normals;

auto weights_acc = weights_->getConstAccessor();
auto colors_acc = colors_->getConstAccessor();

for (auto iter = tsdf_->cbeginValueOn(); iter.test(); ++iter) {
    const auto &voxel = iter.getCoord();
    // get tsdf value and convert tsdf to positive distance
    const auto &tsdf = iter.getValue();
    const auto tsdf_abs = std::abs(tsdf);
    if (tsdf_abs < thresh) {
        const auto& weight = weights_acc.getValue(voxel);
        if (weight > min_weight) {
            const auto& color = colors_acc.getValue(voxel);
            const auto point = GetVoxelCenter(voxel, tsdf_->transform());
            // get normal from tsdf
            const auto normal = GetNormal(voxel);
            points.push_back(Eigen::Vector3d(point.x(), point.y(), point.z()));
            colors.push_back(Eigen::Vector3d(color.x(), color.y(), color.z()));
            normals.push_back(normal);
        }
    }
}
return std::make_tuple(points, colors, normals);

}

// does the same as ExtractPointCloud, but only extracts voxels that are marked as
// was_updated_ == true and have not yet been viewed at their current tsdf value
std::tuple<std::vector<Eigen::Vector3d>, std::vector<Eigen::Vector3d>>
VDBVolume::ExtractPointCloudLivePreview(float thresh, float min_weight) const {
std::vector<Eigen::Vector3d> points;
std::vector<Eigen::Vector3d> colors;
auto tsdf_acc = tsdf_->getConstAccessor();
auto weights_acc = weights_->getConstAccessor();
auto colors_acc = colors_->getConstAccessor();
auto was_viewed_acc = was_viewed_->getUnsafeAccessor();
// iterate over the was_updated_ grid instead of the tsdf_ grid
for (auto iter = was_updated_->beginValueOn(); iter.test(); ++iter) {
    const auto &voxel = iter.getCoord();
    iter.setValue(false);  // set that we've already processed this voxel...
    iter.setValueOff();    // ...and deactivate it so it is skipped until the next update

    // get tsdf value and convert tsdf to positive distance
    const auto &tsdf = tsdf_acc.getValue(voxel);
    const auto tsdf_abs = std::abs(tsdf);
    const auto &viewed_tsdf = was_viewed_acc.getValue(voxel);
    // if viewed value is close to the tsdf value, then we've already processed this voxel
    const auto diff = std::abs(viewed_tsdf - tsdf);
    if (diff < sdf_trunc_ / 2) { // if tsdf value is close enough to last seen, don't try to extract a point.
        continue;
    }
    // set was_viewed_ to true so that this voxel is not extracted again
    was_viewed_acc.setValue(voxel, tsdf);

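    // only voxels near the zero crossing and with enough integration weight produce preview points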
    if (tsdf_abs < thresh) {
        const auto& weight = weights_acc.getValue(voxel);
        if (weight > min_weight) {
            const auto& color = colors_acc.getValue(voxel);
            const auto point = GetVoxelCenter(voxel, tsdf_->transform());
            // TODO: add bloom filter as second step to make this even more efficient 
            // (prevent duplicate points more effectively)
            points.push_back(Eigen::Vector3d(point.x(), point.y(), point.z()));
            colors.push_back(Eigen::Vector3d(color.x(), color.y(), color.z()));
        }
    }
}
return std::make_tuple(points, colors);

}
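For completeness, a rough sketch of how these functions could be driven from a scanning loop. GetNextScan, GetSensorOrigin, and UpdateViewer are placeholder stubs standing in for a real sensor driver and viewer, and the parameter values are only examples; Integrate is the existing vdbfusion API:

#include <Eigen/Core>
#include <vector>

#include "vdbfusion/VDBVolume.h"

// Placeholder hooks standing in for a real sensor driver and viewer.
std::vector<Eigen::Vector3d> GetNextScan() { return {}; }              // would return the next scan in the world frame
Eigen::Vector3d GetSensorOrigin() { return Eigen::Vector3d::Zero(); }  // sensor position in the world frame
void UpdateViewer(const std::vector<Eigen::Vector3d>&,
                  const std::vector<Eigen::Vector3d>&) {}              // would push points/colors to the viewer

int main() {
    // voxel_size, sdf_trunc, space_carving (values are only examples)
    vdbfusion::VDBVolume volume(0.05f, 0.3f, false);

    for (int frame = 0; frame < 1000; ++frame) {
        // fuse the latest scan with a constant weighting function
        volume.Integrate(GetNextScan(), GetSensorOrigin(), [](float /*sdf*/) { return 1.0f; });

        // stream only the voxels that changed since the last preview
        const auto [points, colors] = volume.ExtractPointCloudLivePreview(/*thresh=*/0.25f,
                                                                          /*min_weight=*/1.0f);
        UpdateViewer(points, colors);
    }
    return 0;
}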

mpottinger commented 1 year ago

https://gist.github.com/mpottinger/7dcdd68c1113f81c8e196f7874ea2e64

mpottinger commented 1 year ago

https://youtu.be/OMgjV0DgDzc

nachovizzo commented 1 year ago

wow! This is really impressive! Thanks for sharing.

mpottinger commented 1 year ago

wow! This is really impressive! Thanks for sharing.

Wouldn't be possible without your work and OpenVDB ;) I'm just building on top of it. I did not know how to do TSDF fusion on sparse volumes before; I had only implemented the traditional full dense voxel grid on GPU.

I'm really impressed by how easily and well it all works. A shame it is not better known. I had previously attempted to do this via voxel hashing, which did sort of work, but not nearly as efficiently. This is all real time on a single CPU thread! Mind-blowing, really.

I have also gotten rendering via ray marching working on CPU, but it looks like NanoVDB on GPU will be required for it to be practical.
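For anyone trying the same route, a rough, untested sketch of handing the TSDF grid over to NanoVDB. This assumes a NanoVDB version that still ships nanovdb::openToNanoVDB (newer releases replace it with the createNanoGrid API), and the function name here is just a placeholder:

#include <nanovdb/util/OpenToNanoVDB.h>
#include <openvdb/openvdb.h>

// Convert the OpenVDB TSDF grid into a single flat NanoVDB buffer that can be
// copied to the GPU and ray-marched there (e.g. with nanovdb::ZeroCrossing).
void ConvertToNanoVDB(const openvdb::FloatGrid& tsdf) {
    // The handle owns one pointer-free memory block containing the whole grid.
    auto handle = nanovdb::openToNanoVDB(tsdf);
    const nanovdb::FloatGrid* nanoGrid = handle.grid<float>();
    // handle.data() / handle.size() give the raw bytes to upload with cudaMemcpy;
    // alternatively a GridHandle<nanovdb::CudaDeviceBuffer> can manage the device copy.
    (void)nanoGrid;
}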