gazebosim / gz-common

An audio-visual library supports processing audio and video files, a graphics library can load a variety of 3D mesh file formats into a generic in-memory representation, and the core library of Gazebo Common contains functionality ranging from Base64 encoding/decoding to thread pools.
https://gazebosim.org
Apache License 2.0

SubMesh::RecalculateNormals improvement #609

Closed — ntfshard closed this 4 months ago

ntfshard commented 6 months ago

🦟 Bug fix

Fixes #N/A

Summary

The previous version of the RecalculateNormals member function has O(n^2) complexity because of the inner loop that searches for matching normal elements. But we already know the indices of those elements, so we can get rid of the inner loop. It also seems we should check `indices.size() < 3u || indices.size() % 3u != 0`, because if the size is, say, 4, the main loop will start and on the second iteration it will read out of bounds of the array. Also, the condition at the beginning should check the number of indices/vertices rather than the number of normals, since we resize the normals later. Maybe this function should also check the PrimitiveType enum of the data: calling it for POINTS and LINES doesn't seem legitimate (for the remaining four types it's probably fine, but I'm not sure).
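A minimal sketch of the index-based idea, using hypothetical names and a stand-in vector type rather than the actual gz-common API: each triangle's face normal is accumulated directly onto its three vertices through the known indices, so no inner search loop is needed, and the index-count guard prevents the out-of-bounds read described above.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal vector type standing in for gz::math::Vector3d.
struct Vec3
{
  double x = 0, y = 0, z = 0;
};

static Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
          a.x * b.y - a.y * b.x};
}

static Vec3 Normalized(const Vec3 &v)
{
  double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  if (len < 1e-12)
    return v;
  return {v.x / len, v.y / len, v.z / len};
}

// O(n) recalculation: each triangle's face normal is accumulated
// directly onto its three vertices via the known indices, so no
// inner search over the normal array is needed.
void RecalculateNormalsByIndex(const std::vector<Vec3> &vertices,
                               const std::vector<unsigned> &indices,
                               std::vector<Vec3> &normals)
{
  // Guard: an index count below 3, or one that is not a multiple of 3,
  // would make the loop below read past the end of the array.
  if (indices.size() < 3u || indices.size() % 3u != 0u)
    return;

  normals.assign(vertices.size(), Vec3{});
  for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
  {
    const Vec3 &v1 = vertices[indices[i]];
    const Vec3 &v2 = vertices[indices[i + 1]];
    const Vec3 &v3 = vertices[indices[i + 2]];
    Vec3 n = Normalized(Cross(
        {v2.x - v1.x, v2.y - v1.y, v2.z - v1.z},
        {v3.x - v1.x, v3.y - v1.y, v3.z - v1.z}));
    for (std::size_t j = 0; j < 3; ++j)
    {
      Vec3 &dst = normals[indices[i + j]];
      dst.x += n.x;
      dst.y += n.y;
      dst.z += n.z;
    }
  }
  for (auto &n : normals)
    n = Normalized(n);
}
```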

Checklist

Note to maintainers: Remember to use Squash-Merge and edit the commit message to match the pull request summary while retaining Signed-off-by messages.

ntfshard commented 6 months ago

I force-pushed the commit because the previous one was not signed, and I made the boundary check stricter.

iche033 commented 6 months ago

Looks like there is still an issue with DCO, can you try to fix it again?

iche033 commented 6 months ago

I was testing this with a capsule shape and I noticed that a seam is now visible after the changes:

Before:

cylinder_normal_before

After:

cylinder_normal_after

The RecalculateNormals function was ported from gazebo-classic, so I went through its history and found this: https://github.com/gazebosim/gazebo-classic/commit/ea54b563a82198fafc42744d5aad60ecad86e1d0

From the commit message, the loop was actually implemented on purpose to make objects appear smoother. The effect of that loop is that more normals are updated instead of limiting the update to 3 normals. Not efficient, but it produces smoother results. I'm leaning towards keeping the current code since the behavior was intentional.
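To illustrate the effect being discussed (a hypothetical sketch, not the actual gazebo-classic code): each triangle's face normal is added to every vertex whose position matches one of the triangle's corners, not just the three indexed vertices. Duplicated vertices along a seam then end up with identical averaged normals, which is what makes shading look smooth, at O(n^2) cost.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

// Unit face normal of triangle (a, b, c), counter-clockwise winding.
static Vec3 FaceNormal(const Vec3 &a, const Vec3 &b, const Vec3 &c)
{
  Vec3 u{b[0] - a[0], b[1] - a[1], b[2] - a[2]};
  Vec3 v{c[0] - a[0], c[1] - a[1], c[2] - a[2]};
  Vec3 n{u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]};
  double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
  if (len > 1e-12) { n[0] /= len; n[1] /= len; n[2] /= len; }
  return n;
}

// Smoothing variant: the face normal is added to EVERY vertex whose
// position coincides with one of the triangle's corners -- the O(n^2)
// inner loop that smooths shading across duplicated seam vertices.
std::vector<Vec3> SmoothNormals(const std::vector<Vec3> &verts,
                                const std::vector<unsigned> &idx)
{
  auto same = [](const Vec3 &a, const Vec3 &b) {
    return std::fabs(a[0] - b[0]) < 1e-6 && std::fabs(a[1] - b[1]) < 1e-6 &&
           std::fabs(a[2] - b[2]) < 1e-6;
  };
  std::vector<Vec3> normals(verts.size(), Vec3{});
  for (std::size_t i = 0; i + 2 < idx.size(); i += 3)
  {
    Vec3 n = FaceNormal(verts[idx[i]], verts[idx[i + 1]], verts[idx[i + 2]]);
    for (std::size_t j = 0; j < verts.size(); ++j)  // inner search loop
      for (std::size_t k = 0; k < 3; ++k)
        if (same(verts[j], verts[idx[i + k]]))
        {
          normals[j][0] += n[0];
          normals[j][1] += n[1];
          normals[j][2] += n[2];
          break;
        }
  }
  for (auto &n : normals)  // renormalize the accumulated sums
  {
    double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 1e-12) { n[0] /= len; n[1] /= len; n[2] /= len; }
  }
  return normals;
}
```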

ntfshard commented 6 months ago

I see. Thank you for testing and clarifying. The main problem with the current code is that it can run for about a minute if the mesh has a lot of triangles, even on modern hardware. Maybe I can try to re-implement the initial behaviour with an unordered_map. I guess it should be OK to trade execution time for a memory-consumption spike.

iche033 commented 6 months ago

Maybe I can try to re-implement the initial behaviour with an unordered_map. I guess it should be OK to trade execution time for a memory-consumption spike.

yes that would be good to try, thanks

ntfshard commented 4 months ago

Sorry for the delay. Here is another solution. We group all points by x coordinate and check only the points in the same x-coordinate group (and adjacent groups, because operator== for Vector3d actually treats coordinates as equal within a plus/minus epsilon range).
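The grouping idea can be sketched roughly like this (hypothetical names and an assumed epsilon, not the actual PR code): bucket vertices by their quantized x coordinate in an unordered_map, then for each vertex scan only its own bucket and the two adjacent ones for epsilon-close positions, instead of scanning every other vertex.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <unordered_map>
#include <vector>

using Vec3 = std::array<double, 3>;

// Assumed tolerance, mirroring Vector3d's epsilon-based operator==.
constexpr double kEps = 1e-6;

// For each vertex, list the indices of all coincident vertices.
// Buckets keyed by the x coordinate quantized to the epsilon grid keep
// the per-vertex search local instead of O(n) over all vertices.
std::vector<std::vector<std::size_t>> GroupCoincident(
    const std::vector<Vec3> &verts)
{
  std::unordered_map<long long, std::vector<std::size_t>> buckets;
  auto keyOf = [](double x) {
    return static_cast<long long>(std::floor(x / kEps));
  };
  for (std::size_t i = 0; i < verts.size(); ++i)
    buckets[keyOf(verts[i][0])].push_back(i);

  auto isClose = [&](std::size_t a, std::size_t b) {
    return std::fabs(verts[a][0] - verts[b][0]) <= kEps &&
           std::fabs(verts[a][1] - verts[b][1]) <= kEps &&
           std::fabs(verts[a][2] - verts[b][2]) <= kEps;
  };

  std::vector<std::vector<std::size_t>> groups(verts.size());
  for (std::size_t i = 0; i < verts.size(); ++i)
  {
    long long key = keyOf(verts[i][0]);
    // Only this bucket and its two neighbours can contain matches,
    // because equality is only within +/- epsilon on x.
    for (long long k = key - 1; k <= key + 1; ++k)
    {
      auto it = buckets.find(k);
      if (it == buckets.end())
        continue;
      for (std::size_t j : it->second)
        if (isClose(i, j))
          groups[i].push_back(j);
    }
  }
  return groups;
}
```

Normal smoothing would then average the accumulated normals across each group, giving the same visual result as the O(n^2) search with near-linear cost for well-distributed meshes.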

For relatively small sub-meshes it works slower than the previous version (on my hardware, around 8 milliseconds); it looks like a minimal constant running time. Not sure how critical that is. If it's a showstopper, I can dispatch between the algorithms based on input size, but that would make the code more complex for future readers, which is why I came with this version first.

Also, I'm not sure what's going on with Jenkins CI; I tried to run the binaries locally and they seem to work fine. UPD: seems there is no problem after all, my mistake.

And one last thing: I don't have a full Gazebo setup in my current environment, so I can rely only on unit tests (without a manual visual check).