Closed — msokalski closed this issue 2 years ago.
Hello, @msokalski, and thanks for creating the issue. This is a very interesting find, and I will look into it when I get some spare time.
Here are some profiling data.
Code:

```cpp
// Includes added for completeness; the CDT types (Vertices, V2d, Triangulation)
// come from the CDT headers, and TEST_CASE from the test framework used here.
#include <array>
#include <cstddef>
#include <random>

namespace
{

template <typename T>
Vertices<T> genData(const std::size_t size)
{
    Vertices<T> vv;
    vv.reserve(size + 1);
    // one point just above the axis...
    vv.push_back(V2d<T>::make(0, 1e-15));
    const std::array<T, 2> range = {-1, 1};
    std::uniform_real_distribution<T> unif(range[0], range[1]);
    std::default_random_engine re;
    // ...and many points spread along the X axis
    for(std::size_t i = 0; i < size; ++i)
        vv.push_back(V2d<T>::make(unif(re), 0));
    return vv;
}

} // namespace

TEST_CASE("For_profiling_100k", "")
{
    const auto vv = genData<double>(1e5);
    auto cdt = Triangulation<double>();
    cdt.insertVertices(vv);
    cdt.eraseSuperTriangle();
}
```
Flamegraph: flamegraph_100k.zip
Flamegraph for 1M points: flamegraph_1m.zip
@msokalski
I have re-written the way the triangulation is finalized and triangles are removed. This significantly speeds up the `eraseXXX` methods for the case you provided. Could you please check out the branch https://github.com/artem-ogre/CDT/tree/84-improve-eraseXXX-perf and report your findings?
Excellent boost! Thank you.
before:

```
generating random 1000000 points
cdt removing dups... 270 ms
cdt triangulation... 3141 ms
cdt erasing super... 126413 ms
```

after:

```
generating random 1000000 points
cdt removing dups... 286 ms
cdt triangulation... 3035 ms
cdt erasing super... 1382 ms
```
@msokalski
I have made another improvement: the triangles-by-vertex mapping is no longer re-calculated when calling the `erase...` methods. If needed, it can be computed with the new helper function `calculateTrianglesByVertex`. So it is 'pay only for what you use', which makes `eraseSuperTriangle` another 35% faster.
after after:

```
generating random 1000000 points
cdt removing dups... 220 ms
cdt triangulation... 2565 ms
cdt erasing super... 795 ms
```
Well well, excellent job! Thanks. EDIT: is it going to be merged into master anytime?
@msokalski It is in now. I was busy adding some automated tests on master, but now I am more confident that this change does not break anything.
Thank you! I have a side question regarding degenerate triangulations; should I ask it in a separate issue?
@msokalski Maybe discussions is a better place: https://github.com/artem-ogre/CDT/discussions/categories/q-a
New issue works as well
I'm very happy with the stability and overall performance of CDT, great job! The one pitfall I'm suffering from is the performance of `eraseSuperTriangle()`. Its time complexity appears to grow quadratically with the number of triangles it actually has to remove. I think it's worth looking into optimization possibilities.

Please find my experiment results below. My 'malicious' data set is many, many points spread along the X axis (x = [-1..+1], y = 0), plus one point just a tiny bit above them all (x = 0, y = 1e-15).
Results for 100K, 1M, and 10M points: (timing attachments omitted from this transcript)