I understand the reasoning for the 1.0 <= x <= 2.0 limitation given the BigInt fallback for exact calculation. There are potential alternatives, e.g. expansion arithmetic (https://www.cs.cmu.edu/afs/cs/project/quake/public/code/predicates.c) with Float64, which would at least work for numbers in a reasonable range (roughly the n-th root of the largest representable number, where n is the degree of the polynomial in the predicate), or expansion arithmetic with BigFloat, which should work for arbitrary inputs.
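For context, here is a minimal sketch of the error-free transformations that Shewchuk-style expansion arithmetic builds on, written in Julia over Float64. The names `two_sum` and `two_prod` are illustrative and not part of this library's API:

```julia
# Error-free transformations underlying expansion arithmetic (Shewchuk-style sketch).

# two_sum: returns (s, e) with s + e == a + b exactly, where s = fl(a + b)
# and e is the rounding error (Knuth's branch-free version).
function two_sum(a::Float64, b::Float64)
    s  = a + b
    bv = s - a            # "virtual" b: the part of b that made it into s
    av = s - bv           # "virtual" a
    e  = (a - av) + (b - bv)
    return s, e
end

# two_prod: returns (p, e) with p + e == a * b exactly,
# using a fused multiply-add to recover the rounding error.
function two_prod(a::Float64, b::Float64)
    p = a * b
    e = fma(a, b, -p)
    return p, e
end
```

A predicate is then evaluated as an expansion, i.e. a sum of non-overlapping Float64 components whose total is the exact value; the only remaining restriction is overflow of intermediate products, which is what limits the usable coordinate range to roughly the n-th root of floatmax(Float64).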
This would come at a slight performance cost: the purely static filters, which rely only on an absolute error bound, would be lost. I'd expect the slowdown to be around 2-5% based on tests in C++. I'd be willing to prepare a PR, but since this is non-negligible work, I wanted to ask first whether this compromise (giving up purely static filtering to overcome the limitation of a fixed coordinate range) is compatible with the goals of this library.
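To make the trade-off concrete, here is a hedged sketch (not this library's code) contrasting a purely static orient2d filter, whose constant bound is only valid because coordinates are confined to a fixed range, with a Shewchuk-style semi-static filter whose bound is computed from the actual input magnitudes. The static bound value below is a placeholder:

```julia
# Sketch of the filtering trade-off for orient2d; names and constants are illustrative.

# Purely static filter: one precomputed absolute error bound, valid only because
# all coordinates are assumed to lie in a fixed range such as [1.0, 2.0].
const STATIC_BOUND = 1.0e-13   # placeholder value, not the bound this library uses

function orient2d_static(ax, ay, bx, by, cx, cy)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    abs(det) > STATIC_BOUND && return sign(det)
    return nothing   # ambiguous: fall back to the exact computation
end

# Semi-static filter (Shewchuk's ccwerrboundA): the bound scales with the inputs,
# so arbitrary coordinate magnitudes are allowed at the cost of a few extra flops.
const U = eps(Float64) / 2                 # unit roundoff, 2^-53
const CCW_ERRBOUND_A = (3.0 + 16.0 * U) * U

function orient2d_semistatic(ax, ay, bx, by, cx, cy)
    detleft  = (bx - ax) * (cy - ay)
    detright = (by - ay) * (cx - ax)
    det      = detleft - detright
    errbound = CCW_ERRBOUND_A * (abs(detleft) + abs(detright))
    abs(det) > errbound && return sign(det)
    return nothing   # ambiguous: fall back to the adaptive/exact computation
end
```

The extra cost of the semi-static variant is just computing `errbound` per call instead of comparing against a constant, which is where my rough 2-5% estimate comes from.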