jewettaij / visfd

volumetric image toolkit for simple feature detection (segment tomograms)
MIT License

[Feature Request] improve signal-to-noise of surface geometry near edges #11

Closed · jewettaij closed this issue 3 years ago

jewettaij commented 3 years ago

Note: This is a reminder to myself to implement a new feature. Feel free to ignore this rambling discussion.

Statement of the problem

Currently (as of v0.29.3), when detected surface geometry is saved to a file (e.g. a PLY file), an attempt is made to ensure that the resulting surface is as thin and 2-dimensional as possible. To do that, when a voxel which resembles a point on the surface is detected (i.e. a voxel with high "salience"), its position is shifted to the (estimated) nearest point on the surface before it is saved to the (PLY) file, instead of using the original position of the voxel. (If I don't do something like this, the resulting point cloud can be quite thick, many voxels wide. That would make surface reconstruction difficult later on. The point cloud really should resemble a thin surface before surface reconstruction is attempted.)

Details: This is done by diagonalizing the local Hessian of the salience at that voxel's location, and moving in the direction of the eigenvector with the largest eigenvalue until it reaches the ridge peak. This works fine in regions where the surface is clearly visible. But near the edge of a detected surface, it often fails. (By "edge", I mean a location where the surface comes to an abrupt end, or where there is a hole in the surface. In Cryo-EM tomograms, surface edges are often due to the "missing wedge" artifact.) The problem is that at surface edges the Hessian is not very accurate. As a result, this strategy shifts the point to the wrong location, and the resulting surfaces tend to have frayed, tattered edges. Later, when we attempt to close these holes (using "surface reconstruction" methods), these tattered edges can cause big spikes in the reconstructed surface (or worse).
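For concreteness, here is a minimal numpy sketch of this kind of Hessian-based ridge shift. It is only an illustration, not the code in filter_mrc: the finite-difference Hessian, the function names, and the step loop are my assumptions, boundary checks are omitted, and the eigenvector belonging to the largest-magnitude eigenvalue is treated as the across-surface direction.

```python
import numpy as np

def local_hessian(salience, ijk):
    """Finite-difference 3x3 Hessian of a 3-D salience array at integer voxel ijk."""
    ijk = np.asarray(ijk, dtype=int)
    e = np.eye(3, dtype=int)
    H = np.zeros((3, 3))
    for a in range(3):
        H[a, a] = (salience[tuple(ijk + e[a])]
                   - 2.0 * salience[tuple(ijk)]
                   + salience[tuple(ijk - e[a])])
        for b in range(a + 1, 3):
            H[a, b] = H[b, a] = (salience[tuple(ijk + e[a] + e[b])]
                                 - salience[tuple(ijk + e[a] - e[b])]
                                 - salience[tuple(ijk - e[a] + e[b])]
                                 + salience[tuple(ijk - e[a] - e[b])]) / 4.0
    return H

def shift_to_ridge(salience, ijk, max_steps=5):
    """Move ijk along the across-surface direction (the eigenvector whose
    eigenvalue has the largest magnitude) as long as the salience increases."""
    evals, evecs = np.linalg.eigh(local_hessian(salience, ijk))
    v = evecs[:, np.argmax(np.abs(evals))]        # unit vector across the surface
    best = np.asarray(ijk, dtype=int)
    for sense in (+1.0, -1.0):                    # try both senses of the direction
        p = best.astype(float)
        q = best
        for _ in range(max_steps):
            p = p + sense * v
            r = np.rint(p).astype(int)
            if salience[tuple(r)] <= salience[tuple(q)]:
                break                             # salience stopped increasing
            q = r
        if salience[tuple(q)] > salience[tuple(best)]:
            best = q
    return best                                   # voxel nearest the ridge peak
```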

Suggested alternative strategy:

While there will never be a way to completely eliminate this problem, we can try to minimize it. As mentioned above, we cannot infer where the surface actually is by only considering the local Hessian at one voxel's location. A single voxel's Hessian is not trustworthy. But after looking at the results of tensor voting visually, I concluded that, overall, the voxels with high surface salience are in the right place. We can probably get a much better idea of where the actual surface lies by looking at all of the nearby surface voxel locations and averaging their positions somehow. Averaging many nearby voxel locations would give us an idea of where the surface actually is, and would be much less subject to noise. But how can we do this without sacrificing resolution?

One strategy would be to start from the current voxel's location and move in the direction perpendicular (normal) to the surface, keeping track of which voxels are encountered along the way. The direction of the surface normal may change as you move, so this will trace out a curve. Keep going in both directions along this curve until the salience drops below the threshold value. (See implementation details below.) Then either: 1) use the position of the voxel with the highest salience on this entire path, or 2) compute a weighted average of the positions of the points on this curve, and use that as the position of this point in the point cloud.
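A rough sketch of the tracing step is below. It is an illustration only: it assumes the salience lives in a 3-D numpy array and the per-voxel normals (e.g. from tensor voting) in a companion array of shape (nx, ny, nz, 3), it omits boundary checks, and the function name is hypothetical.

```python
import numpy as np

def trace_normal_curve(salience, normals, start_ijk, threshold, max_steps=20):
    """Walk away from start_ijk along the locally varying surface-normal
    direction, in both senses, until the salience drops below `threshold`.
    Returns a list of (voxel, salience, signed_arc_length) ordered along the curve."""
    start = np.asarray(start_ijk, dtype=int)
    path = [(start, float(salience[tuple(start)]), 0.0)]
    for sense in (+1.0, -1.0):
        p = start.astype(float)
        prev = start
        arclen = 0.0
        for _ in range(max_steps):
            n = normals[tuple(prev)]                  # normal stored at this voxel
            p = p + sense * n / np.linalg.norm(n)     # take a unit step along it
            q = np.rint(p).astype(int)
            if salience[tuple(q)] < threshold:
                break                                 # fell off the surface
            if np.any(q != prev):
                arclen += float(np.linalg.norm(q - prev))
                path.append((q, float(salience[tuple(q)]), sense * arclen))
                prev = q
    path.sort(key=lambda entry: entry[2])             # order by signed arc length
    return path
```

Option 1 would then simply be max(path, key=lambda entry: entry[1])[0]; option 2 is sketched after the implementation details below.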

Implementation details: I am inclined to prefer method 2 (average the voxel positions on the curve). Suggested implementation: Assign a weight to each of the points on this curve based on the salience at those locations. Use these weights to compute the weighted-average distance along this 1-D curve. Then find the 3-D location corresponding to that distance along the curve, and use the normal direction of the voxel at that point on the curve, not the original voxel's normal direction. (That way, the resulting average position will be a point on this curve.)
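Continuing the sketch above, method 2 might look something like the following: compute the salience-weighted mean arc length along the traced curve, snap to the curve point closest to that arc length (so the result stays on the curve), and reuse that voxel's normal. Again, this is a hypothetical illustration, not the filter_mrc implementation.

```python
import numpy as np

def curve_weighted_position(path):
    """Given the (voxel, salience, arc_length) triples from trace_normal_curve(),
    return the voxel on the curve closest to the salience-weighted mean arc length."""
    weights = np.array([s for (_, s, _) in path])
    arclens = np.array([t for (_, _, t) in path])
    mean_arclen = np.sum(weights * arclens) / np.sum(weights)
    k = int(np.argmin(np.abs(arclens - mean_arclen)))
    return path[k][0]
```

The caller would then save this voxel's position to the PLY file together with normals[tuple(voxel)], rather than the original voxel's position and normal.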

Either way, you can use the results of tensor voting to estimate the surface normal direction instead of the local Hessian of the salience. Even if you can't trust the direction of the surface normal, this curve-tracing strategy ensures that the 3D location you pick will be in a region where the bulk of the salience is located, even if it's not the location on the surface closest to the original voxel's starting location.

jewettaij commented 3 years ago

Today I posted a new version of the code which implements idea number 2 from the post above. I wrote:

"Unfortunately, the oriented point-clouds generated by "filter_mrc" do not look noticeably better after this change. (...when viewed visually in meshlab. See the comment in issue #11, for why.) However the number of points on the surface and the length of their surface normals have been adjusted and normalized in this new version of the code. (This is not obvious if you only look at the point cloud in mesh lab.) As a result, if closed-surface reconstruction (a.k.a. "Screened Poisson Reconstruction", "PoissonRecon") is applied to this new point cloud, there will be a modest but somewhat noticeable reduction in bumpiness in the resulting closed surface near the old surface edges."

After analyzing the consequences of this change, I noticed that most of this benefit probably comes simply from weighting surface-normal contributions by saliency (which I was not doing earlier), not from shifting the positions of the voxels.

Since I failed to adequately shift the voxel positions in this version of the code, I think this idea can be explored further: currently, only voxels that are already in the target cluster are considered when searching along each curve. Since these clusters tend to be only 1 voxel wide, most nearby voxels are never explored, even if there is a wide blob of high-saliency voxels located there. A more radical approach would consider all voxels whose saliency is higher than some cutoff and closer than some distance, regardless of whether they belong to this cluster. This could potentially shift voxel positions by an even greater distance (hopefully for the better).
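To make that concrete, the curve search would test candidate voxels against a mask like the one below instead of testing cluster membership. This is a hypothetical numpy helper, not code that exists in filter_mrc; the cutoff and radius parameters are assumptions.

```python
import numpy as np

def candidate_mask(salience, start_ijk, cutoff, radius):
    """Voxels the curve search would be allowed to visit under the more radical
    rule: salience above `cutoff` and within `radius` voxels of the start voxel,
    regardless of which cluster (if any) they belong to."""
    grid = np.indices(salience.shape)                 # shape (3, nx, ny, nz)
    start = np.asarray(start_ijk).reshape(3, 1, 1, 1)
    dist2 = np.sum((grid - start) ** 2, axis=0)
    return (salience >= cutoff) & (dist2 <= radius ** 2)
```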

I can explore this idea later. In the meantime, the new code works at least as well as the code from before version v0.27.3. Since I think I have undone most of the damage that I did in my more recent commits, I will close the issue for now.