CrowCpp / Crow

A Fast and Easy to use microframework for the web.
https://crowcpp.org

Add define to set significant digits when writing json floats and dou… #891

Closed t-cadet closed 3 weeks ago

t-cadet commented 3 weeks ago

Add define to set significant digits when writing json floats and doubles

Use case

Issue

I use snapshot testing to compare the JSON output of my service against an expected JSON output file. At the moment, a slight change in the service's computations causes the last decimals of floating-point numbers to change; the resulting diffs then have to be validated manually, which is time-consuming. The output is also hard to read because there are far too many decimals.

Solution

This commit allows users of Crow to choose the number of significant digits they want in serialized floating-point values. This enables more readable output and more robust JSON diffs. 6 significant digits seems like a reasonable default.
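To make the idea concrete, here is a minimal sketch (not Crow's actual serialization code) of how a compile-time macro could drive the number of significant digits, using `snprintf` with the `"%.*g"` format; the `dump_double` helper and the buffer size are assumptions for illustration:

```cpp
#include <cstdio>
#include <string>

// Hypothetical default, matching the precision proposed in this PR.
#ifndef CROW_JSON_FLOAT_PRECISION
#define CROW_JSON_FLOAT_PRECISION 6
#endif

// Serialize a double with the configured number of significant digits.
std::string dump_double(double v)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.*g", CROW_JSON_FLOAT_PRECISION, v);
    return buf;
}
```

With the default of 6, `dump_double(0.0416054263979028227105)` yields `"0.0416054"`, matching the "After" output below.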

Example

Before

      "max": {
        "real": 0.0416054263979028227105,
        "imag": 0.0311738103389676488031
      },
      "min": {
        "real": 0.0301375840886710585909,
        "imag": 0.030686107921196179027
      }

After

      "max": {
        "real": 0.0416054,
        "imag": 0.0311738
      },
      "min": {
        "real": 0.0301376,
        "imag": 0.0306861
      }
gittiver commented 3 weeks ago

Actually, we already have a solution that is: a) portable, b) provides the maximal possible precision for double values.

Therefore I would prefer not to add this use-case-specific assumption about precision to the library. Besides, there is not even an issue requesting this.

t-cadet commented 3 weeks ago

I opened an issue to request this: https://github.com/CrowCpp/Crow/issues/894

I think the above implementation can default to the current behavior by changing the default #defines to:

#ifndef CROW_JSON_FLOAT_PRECISION
#define CROW_JSON_FLOAT_PRECISION 6
#endif
#ifndef CROW_JSON_DOUBLE_PRECISION
#define CROW_JSON_DOUBLE_PRECISION DECIMAL_DIG
#endif

That way it is both portable and provides maximal precision for doubles by default, but it also allows users to customize the format if they need to, and the responsibility is on users to ensure their custom precision does not break anything.
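The "maximal precision" claim for the `DECIMAL_DIG` default can be checked directly: printing a double with `DECIMAL_DIG` significant digits and parsing it back recovers the exact same value. A small sketch (the `round_trips` helper is illustrative, not part of Crow):

```cpp
#include <cfloat>
#include <cstdio>
#include <cstdlib>

// With DECIMAL_DIG significant digits, "%.*g" output parses back to the
// exact same double, so the maximal-precision default is lossless.
bool round_trips(double v)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.*g", DECIMAL_DIG, v);
    return std::strtod(buf, nullptr) == v;
}
```

This is why defaulting `CROW_JSON_DOUBLE_PRECISION` to `DECIMAL_DIG` preserves the current behavior.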

Another idea would be to allow users to provide the whole format string as a define, making it even more flexible (choosing between the f, F, g, G, e, and E formatters).
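The format-string variant could look like the sketch below; the macro name `CROW_JSON_DOUBLE_FORMAT` and the `dump_double` helper are hypothetical, chosen here only to illustrate the idea:

```cpp
#include <cstdio>
#include <string>

// Hypothetical macro (not in Crow): users override the entire printf
// conversion, e.g. "%.17g" for round-trip precision or "%.3f" for fixed.
#ifndef CROW_JSON_DOUBLE_FORMAT
#define CROW_JSON_DOUBLE_FORMAT "%.6e"  // example: scientific notation
#endif

std::string dump_double(double v)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), CROW_JSON_DOUBLE_FORMAT, v);
    return buf;
}
```

One caveat of this design: an arbitrary user-supplied format string could produce output that is not valid JSON (e.g. a format without a numeric conversion), so the library would be trusting users not to break serialization.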