There are a few things that make it different.
First, `real_t` is defined as `float` by default in MY-BASIC (you could change it to `double`).
Second, `float` and `double` evaluate differently, for example:
float a = 123.0 / 321.0 * 321.321123; // Calculated in `double`, narrowed into `float`.
float b = 123.0f / 321.0f * 321.321123f; // Calculated in `float`.
printf("%f\n%f\n", a, b);
which results in:
123.123047
123.123055
Third, MY-BASIC uses `"%g"` by default to output `real_t` (you can change that as well).
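To make the formatting point concrete, here is a minimal standalone sketch (my own example, not MY-BASIC's actual output path; the digits in the comments are indicative):

```c
#include <stdio.h>

int main(void) {
    float a = 123.0 / 321.0 * 321.321123; /* calculated in double, narrowed into float */

    /* "%f" prints 6 digits after the decimal point by default, "%g" prints
       6 significant digits, and an explicit precision changes that again,
       so the same stored value can look like a different result. */
    printf("%f\n", a);   /* e.g. 123.123047 */
    printf("%g\n", a);   /* e.g. 123.123 */
    printf("%.8g\n", a); /* e.g. 123.12305 */
    return 0;
}
```

So a difference in the format string alone can explain seeing 123.123047 from one build and 123.12305 from another.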
Appreciate these details, thanks v much.
Reopening, as I will be doing further testing. I still don't understand, however, why MY-BASIC without any changes produces different results on different architectures, even though the equivalent C++ code returns consistent results.
Closing; the issue turned out to be MB_MANUAL_REAL_FORMATTING being enabled in one of the architectures' builds.
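For anyone who runs into the same thing: a quick way to confirm how each build is configured is a compile-time probe (a sketch, assuming MB_MANUAL_REAL_FORMATTING is an ordinary preprocessor define visible after including my_basic.h):

```c
#include "my_basic.h"

/* Emit a note into each architecture's build log so you can see at a
   glance whether the formatting switch is on for that build. */
#ifdef MB_MANUAL_REAL_FORMATTING
#  pragma message("MB_MANUAL_REAL_FORMATTING is enabled in this build")
#else
#  pragma message("MB_MANUAL_REAL_FORMATTING is NOT enabled in this build")
#endif
```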
Hello, I just did a small test on three separate architectures, running a simple BAS script.
First, the C++ code:
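(The code block from the original post did not come through here; judging from the expression quoted in the reply above, it was presumably something along these lines, though not necessarily the exact code:)

```c
#include <stdio.h>

int main(void) {
    /* same expression as discussed in the reply above */
    float a = 123.0 / 321.0 * 321.321123;
    printf("%f\n", a);
    return 0;
}
```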
Results:
Then I ran exactly the same operation using MY-BASIC; the script is as follows:
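(The script block is also missing here; presumably it was the MY-BASIC equivalent of the same expression, roughly like this:)

```basic
' presumed equivalent of the C++ snippet above
print 123.0 / 321.0 * 321.321123;
```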
Results:
Why is there a discrepancy in the results, and what can cause it? It looks like rounding (123.123047 to 123.12305) is happening.