AngusJohnson / Clipper2

Polygon Clipping and Offsetting - C++, C# and Delphi
Boost Software License 1.0

C++: exceptions (and floating point scaling) #25

Closed frankbielig closed 1 year ago

frankbielig commented 2 years ago

As a developer, I would like to be able to use the C++ library in projects where exception support is disabled.

AngusJohnson commented 2 years ago

OK. Firstly, please understand that I'm not overly familiar with C++, so what may be blindingly obvious to you and most C++ developers probably won't be to me.

Nevertheless, just baldly stating what you'd like is unlikely to persuade me to do anything. I'll need context as to why this is not only important to you but, more importantly, why this is likely to be important to a lot of other C++ developers. Failing that my strong inclination will be to leave it to you to make your own custom version of the library.

Cheers.

frankbielig commented 2 years ago

First of all, I would like to thank you very much for your activities! Far be it from me to give you any advice.

The use of exceptions does not come without cost. For one thing, it automatically activates run-time type information (RTTI), which by itself makes the code bigger. It can also bring disadvantages depending on the runtime environment. Opinions differ here.

But my main argument is that a library that manages errors without exceptions can be used, without adjustments, in projects that do use exceptions; it does not work the other way around. Especially now, when you are defining the C++ API, a decision to include exceptions should be made consciously. Of course, it is entirely up to you.

AngusJohnson commented 2 years ago

Thanks Frank. You seem to be implying that most library users would prefer to see errors handled without throwing exceptions.

haberbyte commented 2 years ago

I would like to use this library in the browser via WebAssembly, and there it would also be really good to have support for disabling exceptions. I can work on a PR that removes exceptions, and maybe introduce a build script via CMake?

AngusJohnson commented 2 years ago

Exceptions have now been removed 😁.

yoavmil commented 1 year ago

how about:

#define clipper2_use_exceptions // defined via CMake, defaulting to true
#ifdef clipper2_use_exceptions
  #define THROW(cond, excep) do { if (cond) throw excep; } while (0)
#else
  #define THROW(cond, excep) do { } while (0)
#endif

and replace

    if (decimal_prec > 8 || decimal_prec < -8)
      throw Clipper2Exception("invalid decimal precision");

with

THROW((decimal_prec > 8 || decimal_prec < -8), Clipper2Exception("invalid decimal precision"));

BTW I noticed that sometimes you throw a new exception and sometimes you throw without new. Is that intended? Not sure if the same catch statement catches both...

lederernc commented 1 year ago

I really do not like the use of macros, especially when they fundamentally change the behavior of the program.

If we want to remove exceptions then refactoring to prefer error return information (like in C) is probably the best option. Ignoring the issue is not actually helpful.

sergey-239 commented 1 year ago

@AngusJohnson

BTW I noticed that sometimes you throw a new exception and sometimes you throw without new. Is that intended? Not sure if the same catch statement catches both...

Indeed, a single catch clause would not catch both. Unless there is a special reason to throw by pointer (a throw with new ...), it is much better to throw by value and catch by reference.
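To make the difference concrete, here is a small self-contained example (illustrative only, not Clipper2 code): the usual catch-by-reference handler matches only the exception thrown by value.

#include <iostream>
#include <stdexcept>

int main() {
    // Thrown by value: caught by the ordinary catch-by-reference handler.
    try {
        throw std::runtime_error("thrown by value");
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << std::endl;
    }

    // Thrown with new: the exception object is a pointer, so the
    // catch-by-reference handler does not match it.
    try {
        throw new std::runtime_error("thrown by pointer");
    } catch (const std::exception& e) {
        std::cout << "never reached" << std::endl;
    } catch (const std::exception* e) {  // needs a separate pointer handler
        std::cout << "caught pointer: " << e->what() << std::endl;
        delete e;                        // and manual cleanup
    }
}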

reunanen commented 1 year ago

I guess there could be low-level error-code-returning functions that do not throw, and then convenience wrappers around those that check the return codes and throw if appropriate. People who do not want exceptions could choose to not even build the convenience wrappers.
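A minimal sketch of that layering, with made-up names (CheckPrecision, ClipError, the CLIPPER2_NO_EXCEPTIONS flag) rather than Clipper2's actual API:

#include <stdexcept>

// Hypothetical layering: a non-throwing core that reports failure via a
// return code, plus a thin throwing convenience wrapper on top of it.
enum class ClipError { Ok, InvalidPrecision };

// Low-level: never throws, usable in builds with exceptions disabled.
ClipError CheckPrecisionNoThrow(int decimal_prec) noexcept {
    if (decimal_prec > 8 || decimal_prec < -8)
        return ClipError::InvalidPrecision;
    return ClipError::Ok;
}

#ifndef CLIPPER2_NO_EXCEPTIONS  // hypothetical build flag
// Convenience wrapper: translates the error code into an exception.
void CheckPrecision(int decimal_prec) {
    if (CheckPrecisionNoThrow(decimal_prec) != ClipError::Ok)
        throw std::invalid_argument("invalid decimal precision");
}
#endif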

How do other libraries out there solve this problem? Can't exactly be a unique situation, right?

yoavmil commented 1 year ago

How do other libraries out there solve this problem? Can't exactly be a unique situation, right?

there are several options, each with a downside:

  1. bool foo(Bar& bar); - The return value is a success flag, and the actual result comes back through a reference parameter. Personally I prefer the method result to be the return value, because it reads more clearly. The boolean can also be ignored, and there is no detailed error message.
  2. Bar foo(bool* ok = nullptr); - The return value is the function result, and the caller can check the ok flag if they want to. No error message though (optionally: Bar foo(std::string* errMsg = nullptr);).
  3. Bar foo() throw(std::exception) - Use exceptions and mark them in the method signature. My preferred practice; it is also similar to C#, as I recall. (Note that dynamic exception specifications like this are deprecated in modern C++.)
  4. std::optional<Bar> foo(); - A modern solution: the return value either holds a value or is empty (a sketch follows below). For me it's a little cumbersome. There's a Boost alternative for this.
  5. Wrapping throw statements with a macro, as I suggested above.

AFAIK using std::exception is the standard approach. But for rare use cases it sounds reasonable to wrap the throw statements in a macro so they can be disabled altogether. In Clipper2 there are only 4 throws, and they are all about the conversion ratio, so this isn't such a big deal.
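For illustration, a rough sketch of option 4 above, using placeholder names (Bar, foo) rather than anything from Clipper2, and the same precision check quoted earlier in the thread:

#include <iostream>
#include <optional>

struct Bar { double area; };  // placeholder result type

std::optional<Bar> foo(int decimal_prec) {
    if (decimal_prec > 8 || decimal_prec < -8)
        return std::nullopt;  // failure, but no error message available
    return Bar{42.0};
}

int main() {
    if (auto bar = foo(3))
        std::cout << bar->area << std::endl;
    else
        std::cout << "invalid decimal precision" << std::endl;
}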

sergey-239 commented 1 year ago

At the moment the exceptions are used only to report a precision outside the acceptable range, and I am not sure the 1E-8..1E+8 limit makes much sense. A double can hold values from roughly 1.7E-308 to 1.7E+308, but with fewer significant decimal digits than an int64 can represent: approximately 15 decimal digits. So, basically, the check is not correct at all: someone could, for example, work with polygons whose vertices have coordinates in the range 1E200 through 1E214 without any trouble. The only thing they would have to ensure is that all supplied polygons have vertices within a 1E15 neighborhood.

Edit: I mean that it is sufficient to mention this problem in the manual: anyone who operates with floating point should already be familiar with the implications of using floating point numbers.

Edit2: Moreover, I would propose to add an additional ClipperD(double scale = 1.0) constructor and a convenience static method to determine the maximum base-2 exponent (actually a scale factor, to simplify use) present in Path(s)D instances, in order to calculate a scale for the ClipperD instance that preserves as much of the original FP data's precision as possible.
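As a quick standalone illustration of that precision point (not Clipper2 code):

#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    // A double carries ~15 significant decimal digits regardless of magnitude,
    // fewer than the ~18 an int64_t can represent.
    std::cout << std::numeric_limits<double>::digits10 << std::endl;   // 15
    std::cout << std::numeric_limits<int64_t>::digits10 << std::endl;  // 18
    // Two values ~17 orders of magnitude apart cannot be distinguished,
    // even though both lie comfortably inside the double's exponent range.
    std::cout << (1.0e200 + 1.0e183 == 1.0e200) << std::endl;          // 1 (true)
}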

sergey-239 commented 1 year ago

Moreover, I would propose to add an additional ClipperD(double scale = 1.0) constructor and a convenience static method to determine the maximum base-2 exponent (actually a scale factor, to simplify use) present in Path(s)D instances, in order to calculate a scale for the ClipperD instance that preserves as much of the original FP data's precision as possible.

See #295, please

AngusJohnson commented 1 year ago

At the moment the exceptions are used only to report a precision outside the acceptable range, and I am not sure the 1E-8..1E+8 limit makes much sense. A double can hold values from roughly 1.7E-308 to 1.7E+308, but with fewer significant decimal digits than an int64 can represent: approximately 15 decimal digits.

Thanks for the feedback, and I'd be very happy to change this if there's a sensible solution.

What I consider a safe coordinate range is one that allows coordinate addition and subtraction without overflow. (Whenever multiplication is needed, which is infrequent, I've resorted to double conversion for safety.) Anyhow, with that limitation, I've documented ± 4e18 as the permitted range for coordinates.

I can't imagine any geometric calculations requiring the full precision provided by this library. Nevertheless, implicit in your question is: how can we best, safely and simply, convert floating point values (with scaling) into this integer range? I arrived at a maximum decimal precision of ± 8 because I considered this a reasonable limitation for geometric calculations while still accommodating non-fractional values in the billions.
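Spelling out the arithmetic behind those figures (an illustrative check only, using the ± 4e18 range and the precision-8 scale mentioned above):

#include <cstdint>
#include <limits>

// INT64_MAX is roughly 9.22e18, so adding or subtracting two coordinates that
// are each bounded by +/- 4e18 cannot overflow an int64_t.
static_assert(2 * 4'000'000'000'000'000'000LL <
                  std::numeric_limits<int64_t>::max(),
              "sum of two in-range coordinates fits in int64_t");
// With a decimal precision of 8 (scale = 1e8), original values up to about
// +/- 4e10 (tens of billions) still map inside that +/- 4e18 range.
static_assert(4'000'000'000'000'000'000LL / 100'000'000LL == 40'000'000'000LL,
              "precision 8 leaves room for non-fractional values in the tens of billions");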

Anyhow I accept that, at a minimum, this should be properly explained in the documentation.

Moreover, I would propose to add an additional ClipperD(double scale = 1.0)

I'd need a lot of persuading. It seems that quite a few users still perform geometric operations using very small coordinates that are inappropriate with respect to the library's integer rounding. This is mitigated by the default scaling (precision = 2), and I don't really understand the downside of this scaling. I'd also argue that, for the majority of users, setting a decimal precision is simpler (more intuitive) than setting a scale, since precision implies both scaling and descaling.

Moreover, I would propose to add an additional ClipperD(double scale = 1.0) constructor and a convenience static method to determine the maximum base-2 exponent (actually a scale factor, to simplify use) present in Path(s)D instances, in order to calculate a scale for the ClipperD instance that preserves as much of the original FP data's precision as possible.

And another step that will impact performance.

sergey-239 commented 1 year ago

What I consider is a safe coordinate range allows for coordinate addition and subtraction without overflow.

Yes, this is easily achievable; we have plenty of headroom: a double has only 53 significant bits, while an int64 provides 63 (plus the sign bit).

And I don't really understand the downside of this scaling.

I am not against scaling and conversion to integers and back at all. But the divisor in use is not a good one: a power of 10 contains a factor of 5, and 5 is odd.

And another step that will impact performance.

Yes, that's why in the POC (#295) I decided to track coordinates in PointD() instead of examining Path(s)D afterwards: in PointD() the coordinates are already in registers, so an extra check on the value would not incur much overhead.

AngusJohnson commented 1 year ago

I am not against scaling and conversion to integers and back at all. But the divisor in use is not a good one: a power of 10 contains a factor of 5, and 5 is odd.

I don't understand where you're getting 5. The library currently permits scaling with a precision of up to 8 decimals.

sergey-239 commented 1 year ago

I don't understand where you're getting 5. The library currently permits scaling with a precision of up to 8 decimals.

10 = 5 * 2, and 1/5 is a periodic (repeating) fraction in base-2 notation.

Look at the following code:

#include <cstdint>
#include <iostream>

int main() {
    double scale = 100;               // decimal scale: 1/100 is not exact in binary
    double invscale = 1 / scale;
    double d = 0;
    for (int i = 0; i < 50; i++) {
        int64_t n = static_cast<int64_t>(d * scale);  // truncating conversion
        n += 4;
        d = n * invscale;
    }
    std::cout << d << std::endl;      // prints 1.99
}

The (un)expected output is 1.99, so the accumulated error is 0.01. If we scaled with a power of 2 we would not have this error. We can't predict how an average user will use your library, but recently we had a question from one who was iteratively calling the Minkowski difference more than 100 times...
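For comparison, the same loop with a power-of-two scale (an illustrative variation, not from the original comment) accumulates no error, because every intermediate value is exactly representable:

#include <cstdint>
#include <iostream>

int main() {
    double scale = 128;           // power of two: 1/128 is exact in binary
    double invscale = 1 / scale;
    double d = 0;
    for (int i = 0; i < 50; i++) {
        int64_t n = static_cast<int64_t>(d * scale);
        n += 4;
        d = n * invscale;
    }
    std::cout << d << std::endl;  // prints exactly 1.5625 (i.e. 200/128)
}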

AngusJohnson commented 1 year ago

How about this for a solution - increase the scale derived from the precision up to the next power of 2:

  int precision = 2;
  double scale = exp2(ceil(log2(pow(10, precision))));
  double invscale = 1.0 / scale;

sergey-239 commented 1 year ago

Yes, it could be defined this way. However, there is a slightly faster solution:

scale = std::pow((double)std::numeric_limits<double>::radix, std::ilogb(std::pow(10, precision)) + 1);

On most modern machines the floating point radix is equal to 2, but for portability it's better to use the numeric_limits value.
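As a quick sanity check (illustrative snippet only), both expressions produce the same scale for, e.g., precision = 2:

#include <cmath>
#include <iostream>
#include <limits>

int main() {
    int precision = 2;
    // Angus's form: round the decimal scale up to the next power of two.
    double scale_a = std::exp2(std::ceil(std::log2(std::pow(10, precision))));
    // sergey-239's form: use the floating point radix (2 on typical hardware).
    double scale_b = std::pow((double)std::numeric_limits<double>::radix,
                              std::ilogb(std::pow(10, precision)) + 1);
    std::cout << scale_a << " " << scale_b << std::endl;  // both print 128
}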