mglisse opened this issue 5 years ago
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Alpha_shape_3.h>
#include <CGAL/Alpha_shape_cell_base_3.h>
#include <CGAL/Alpha_shape_vertex_base_3.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <CGAL/Triangulation_data_structure_3.h>
#include <CGAL/point_generators_3.h>
#include <CGAL/Timer.h>
#include <CGAL/iterator.h> // CGAL::Counting_output_iterator
#include <iostream>
#include <vector>
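// VERS selects the benchmarked configuration:
//   1 = Epick kernel, ExactAlphaComparisonTag = Tag_false
//   2 = Epick kernel, ExactAlphaComparisonTag = Tag_true
//   3 = Epeck kernel, ExactAlphaComparisonTag = Tag_false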
#if VERS == 1
typedef CGAL::Exact_predicates_inexact_constructions_kernel Gt;
typedef CGAL::Tag_false Alpha_cmp_tag;
#elif VERS == 2
typedef CGAL::Exact_predicates_inexact_constructions_kernel Gt;
typedef CGAL::Tag_true Alpha_cmp_tag;
#else
typedef CGAL::Exact_predicates_exact_constructions_kernel Gt;
typedef CGAL::Tag_false Alpha_cmp_tag;
#endif
typedef CGAL::Alpha_shape_vertex_base_3<Gt,CGAL::Default,Alpha_cmp_tag> Vb;
typedef CGAL::Alpha_shape_cell_base_3<Gt,CGAL::Default,Alpha_cmp_tag> Fb;
typedef CGAL::Triangulation_data_structure_3<Vb,Fb> Tds;
typedef CGAL::Delaunay_triangulation_3<Gt,Tds> Triangulation_3;
typedef CGAL::Alpha_shape_3<Triangulation_3,Alpha_cmp_tag> Alpha_shape_3;
typedef Gt::Point_3 Point;
int main()
{
CGAL::Random_points_in_sphere_3<Point> gen;
std::vector<Point> lp;
for(int i=0;i<5000;++i) lp.push_back(*gen++);
CGAL::Timer time; time.start();
Alpha_shape_3 as(lp.begin(),lp.end(),0,Alpha_shape_3::GENERAL);
time.stop(); std::cerr<<time.time()<<' ';
time.reset(); time.start();
std::size_t n = 0; CGAL::Counting_output_iterator it(&n); // counts the filtration elements
as.filtration_with_alpha_values(it);
time.stop(); std::cerr<<time.time()<<'\n';
}
One might naively expect the running time to increase as VERS goes from 1 to 3 (the configuration is selected at compile time, e.g. with -DVERS=2). However, doing 3 runs for each, I get (per run: alpha shape construction time, then filtration_with_alpha_values time, in seconds):
VERS 1:  0.152342 0.069808   0.164331 0.068143   0.165048 0.070491
VERS 2:  0.34832  2.39144    0.364394 2.40164    0.353964 2.37644
VERS 3:  0.252748 0.096083   0.264849 0.097304   0.258651 0.096067
I am pleasantly surprised at how fast Epeck is, even in the first phase.
The hack in question (checking in Lazy_alpha_nt_3's comparison operators whether the two operands are built from the same points) is:
--- /usr/include/CGAL/internal/Lazy_alpha_nt_3.h 2018-10-01 11:39:04.000000000 +0200
+++ i/CGAL/internal/Lazy_alpha_nt_3.h 2018-11-17 22:34:50.892247764 +0100
@@ -83,6 +83,17 @@
const T* p1;
const T* p2;
const T* p3;
+ bool is_same(Input_points_for_lazy_alpha_nt_3 const&o)const{
+ if(nbpts!=o.nbpts)return false;
+ if(p0!=o.p0)return false;
+ if(nbpts==1)return true;
+ if(p1!=o.p1)return false;
+ if(nbpts==2)return true;
+ if(p2!=o.p2)return false;
+ if(nbpts==3)return true;
+ if(p3!=o.p3)return false;
+ return true;
+ }
};
//non-weighted case
@@ -305,8 +316,10 @@
Uncertain<bool> res = this->approx() CMP other.approx(); \
if (res.is_certain()) \
return res; \
- else \
+ else { \
+ if(data().is_same(other.data())) return 42 CMP 42; \
return this->exact() CMP other.exact(); \
+ } \
} \
CGAL_LANT_COMPARE_FUNCTIONS(<)
which gives
VERS 2:  0.345235 0.149817   0.340077 0.147941   0.356885 0.153137
Still not as good as Epeck, but at least it takes less than twice as long.
The exactTag feature was introduced in https://cgal.geometryfactory.com/CGAL/Members/wiki/Features/Small_Features/Filtered_alpha_shapes_traits . It came with a benchmark, "Time to build the alpha-shape from a regular triangulation of 4251 wpoints", which gave 0.36s for Epick+false, 0.52s for Epick+true, and 0.91s for Epeck. Comparing with the numbers I posted above (which may not be a fair comparison), it seems that Epeck has made progress while Epick+true has regressed?
I ran the benchmark with another model (can't find the original one) and I got:
EPICK+false: 0.113218
EPICK+true: 0.282292
EPECK: 0.169083
EPECK clearly made huge progress, as it almost matches the runtime of EPICK+false. I don't think EPICK+true regressed, but I'm very curious to understand where the progress of EPECK comes from and how to replicate it in Lazy_alpha_nt_3 (your patch is an additional optimization, but we should at least be able to be faster than EPECK without it).
For the Delaunay construction, Epick and Epeck should be using the same static and interval filters, so it isn't surprising that this part is comparable; but then it should also be the same with Tag_true, so the difference should be in the computation of the alpha values. Lazy_alpha_nt_3 doesn't do any sharing, so you may end up evaluating the "same" exact value several times. Also, Lazy_exact_nt comparisons already check for identity between the operands (which is what my patch tries to emulate, except that checking "same address" is easier than checking "same list of points"). I didn't try to investigate closely where the time was going in the alpha-complex construction, since it was dominated by the filtration computation.
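For illustration, here is a minimal sketch of that identity short-circuit, using a hypothetical LazyValue type rather than CGAL's actual classes: when both operands are handles to the same shared representation, a comparison such as x < x can be answered without ever evaluating the exact value.

#include <memory>

// Hypothetical stand-in for a lazy number type that shares its representation.
struct LazyValue {
  std::shared_ptr<const double> rep;           // shared (lazy) representation
  bool identical(const LazyValue& o) const {   // same handle => provably the same value
    return rep == o.rep;
  }
};

bool less_than(const LazyValue& a, const LazyValue& b) {
  if (a.identical(b)) return false;            // x < x is false; no exact evaluation needed
  // ... otherwise: interval filter first, exact evaluation only as a last resort ...
  return *a.rep < *b.rep;
}

int main() {
  LazyValue x{ std::make_shared<const double>(0.25) };
  return less_than(x, x) ? 1 : 0;              // the identity check fires, returns 0
}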
If Epick+true has regressed, #1718 looks like a possible culprit (that would make it my fault...).
Alpha_shape_3, when used with Epick and ExactAlphaComparisonTag set to Tag_true, uses a lazy number type (Lazy_alpha_nt_3) to represent alpha values. Note that I only care about the GENERAL case, not REGULARIZED. One of the main operations in filtration_with_alpha_values is sorting the alpha values. Remember that in an alpha-complex it is quite common for a triangle to have the same alpha value as one of its adjacent tetrahedra; such ties make the interval filter fail, so we end up calling update_exact on most of the values, and the computation takes more than 10 times as long as with Tag_false, even for completely non-degenerate data.
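To illustrate why such ties defeat the filter (a sketch using CGAL's interval type directly, not the actual Lazy_alpha_nt_3 code path): comparing two overlapping intervals, and in particular two copies of the same approximate alpha value, yields an uncertain result, which forces the fallback to exact computation.

#include <CGAL/Interval_nt.h>
#include <CGAL/Uncertain.h>
#include <iostream>

int main() {
  CGAL::Interval_nt<> a(0.9999999, 1.0000001); // approximation of some alpha value
  CGAL::Interval_nt<> b = a;                   // the "same" alpha reached through an adjacent cell
  CGAL::Uncertain<bool> res = (a < b);         // overlapping intervals: the comparison is undecidable
  std::cout << (res.is_certain() ? "certain" : "uncertain, exact fallback needed") << '\n';
  return 0;
}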
As an experiment, in the comparison operators of Lazy_alpha_nt_3, when the filter fails, I hacked in a check of whether the two operands have the same Data_vector, returning without going through update_exact in that case. I measured a 10+ times improvement in the running time of filtration_with_alpha_values.
The comparison operators for Epeck's Lazy_exact_nt check a.identical(b); I need to check whether that makes it immune to this problem or not.

"Optimizing" the comparison a <= a is a hack. Alpha_shape_3 should know structurally that the edge AB must have an alpha value <= that of the triangle ABC; it knew it when it assigned a value to the edge AB.
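One way to read that last remark, sketched with hypothetical stand-in types rather than Alpha_shape_3's actual data structures: if a face inherits its alpha value from an incident cell, storing the same shared object (instead of rebuilding a lazy value from the points) would let a later comparison detect the tie by identity, in the spirit of Lazy_exact_nt's a.identical(b), with no exact computation at all.

#include <memory>

using Alpha_handle = std::shared_ptr<const double>; // stand-in for a shared lazy alpha value

struct Face { Alpha_handle alpha; };
struct Cell { Alpha_handle alpha; };

int main() {
  Cell c{ std::make_shared<const double>(0.25) };   // alpha computed once for the cell
  Face f;
  f.alpha = c.alpha;                                // the face inherits it: share, don't recompute
  bool tie = (f.alpha == c.alpha);                  // the tie is detected by pointer identity
  return tie ? 0 : 1;
}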