Closed: BrianRosamilia closed this issue 1 year ago
Hey, thanks for bringing this to my attention. Yeah, this is definitely a bug.
The Nth root method was implemented based on the fact that the Nth root of a fraction can be calculated by taking the Nth root of the numerator divided by the Nth root of the denominator. To compute those two roots, I use a BigInteger-based Nth root method, which is an integer-only implementation and therefore truncates each root to the nearest integer.
This approach may be acceptable for large values in the numerator and denominator, but it is totally inadequate when the fraction is some small integer N over 1. What is probably missing here is a parameter allowing the caller to specify the minimum level of precision they want from the algorithm: in the majority of cases the result will not be a rational number, so the numerator and denominator would grow without bound trying to calculate the value exactly, and it is not clear what an appropriate level of precision would be for the general case.
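To make the failure mode concrete, here is a minimal Python sketch (illustrative only, not the library's actual code) of the truncating numerator/denominator approach described above. For 50/1 it returns exactly 3, because the cube root of 1 is 1 and the integer cube root of 50 truncates to 3:

```python
from fractions import Fraction

def integer_nth_root(b, n):
    """Truncated integer Nth root: the largest r with r**n <= b (binary search)."""
    if b < 2:
        return b
    lo, hi = 1, b
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= b:
            lo = mid
        else:
            hi = mid - 1
    return lo

def naive_fraction_nth_root(frac, n):
    # Take the truncated root of the numerator and denominator separately.
    # All fractional information is discarded by the integer truncation.
    return Fraction(integer_nth_root(frac.numerator, n),
                    integer_nth_root(frac.denominator, n))

print(naive_fraction_nth_root(Fraction(50, 1), 3))  # 3, not ~3.684
```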
Let me define some terminology here real quick: if one is taking the Nth root of some integer B, I will be referring to B as the base, N as the root, and the result as, well, the result.
Now, continuing on: I thought this would be an easy fix, because my integer Nth root implementation has an overload that returns a remainder value. I had thought that dividing that remainder by the base would recover the fractional value of the Nth root that was lost by working within the set of integers. This... is not true. It is true for the division remainder, which is what I was thinking of, but not for the Nth root remainder. I could give the BigRational NthRoot an overload that returns this remainder as an additional value (it currently does not offer this). This 'remainder' would represent the difference between the base and the closest perfect Nth power, which is equivalent to subtracting the result raised to the Nth power from the base. This likely isn't what most people would want or expect, however.
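For concreteness, here is a small Python sketch (illustrative, not the library's API) of what such a remainder overload would return, and why dividing that remainder by the base does not recover the truncated fractional part:

```python
def integer_nth_root_with_remainder(base, n):
    """Return (r, rem) where r is the truncated integer Nth root of base
    and rem = base - r**n, the distance down to the nearest perfect Nth power."""
    r = 0
    while (r + 1) ** n <= base:
        r += 1
    return r, base - r ** n

r, rem = integer_nth_root_with_remainder(50, 3)  # r = 3, rem = 50 - 3**3 = 23
# rem / base = 23/50 = 0.46, while the true fractional part of the cube
# root of 50 is about 0.684, so the remainder does not recover what was lost.
```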
So, this is going to require a bit more thought and research on my part. But I just wanted to let you know that I have seen your bug report, this is definitely some sub-optimal behavior, you weren't using the library wrong or anything, and I am looking into a fix for you.
Once again, this is probably going to require an extra parameter for the user to specify the desired level of precision. I'll probably have the user supply it as some tiny fractional value which must be larger than the absolute difference between the returned result and the true Nth root value. Unless you can think of something better or more natural to use...?
I'd estimate 1-3 days before I'll have some kind of resolution for this. Feel free to inquire into my progress if you don't hear anything from me within three days; it's entirely possible that I become distracted by the other development projects I have in flight and vying for my attention right now.
@AdamWhiteHat great, thanks for the reply.
> Once again, this is probably going to require an extra parameter for the user to specify the desired level of precision. I'll probably have the user supply it as some tiny fractional value which must be larger than the absolute difference between the returned result and the true Nth root value. Unless you can think of something better or more natural to use...?
I've never used it before, but I did some research on how mpmath in python does this and there is a global value you can set for # of decimal places for calculations like this.
https://mpmath.org/doc/current/basics.html (see: `dps`)
https://mpmath.org/doc/current/functions/powers.html
Might be a little nicer looking to do it this way, and then internally you'll just do `.1^N` and return a value within `.1^N` of the real value. Realistically, people aren't going to pass `.0000006` as their precision, right? So perhaps an equivalent to `dps` is best.
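If a `dps`-style setting were adopted, the internal conversion to a tolerance would be trivial; a hypothetical sketch in Python (the function name here is made up for illustration):

```python
from fractions import Fraction

def dps_to_tolerance(dps):
    """Map a 'decimal places' setting to the tolerance the algorithm must meet:
    the returned root must be within 1/10**dps of the true value."""
    return Fraction(1, 10 ** dps)

print(dps_to_tolerance(6))  # 1/1000000
```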
An optional precision parameter could be useful for all of the methods in BigRational, particularly when using BigRational for iterative calculations that can blow out to ridiculously large internal BigInteger values after millions of iterations. A good solution to this issue might present itself down the track but a precision parameter would be acceptable.
Okay, after trying a few different approaches, I finally got NthRoot working properly, to my satisfaction. This should be confirmed by the newly added tests, which include your cube root of 50 test, amongst others.
I have also pushed a new version of the library to NuGet that includes this fix: version number 2023.155.2020.
You can close this issue if you are satisfied; otherwise I will probably close this issue if I don't hear anything from you after about a week or so.
As discussed, NthRoot now takes an (optional) precision parameter. If not specified, a default value of 30 is used (approximately the precision the Decimal class gives you). This precision parameter specifies the minimum number of correct decimal digits the returned rational approximation should give.
This NthRoot algorithm works in almost exactly the same way as my arbitrary-precision NthRoot method for integers. The only difference is that instead of dividing the current range defined by the upper and lower bounds by 2 and keeping the half which contains the correct answer, it finds the fractional mediant of the upper and lower bounds. Also, it sets the upper bound no lower than 1, to handle finding the Nth root of values less than 1.

This is algorithmically very simple, and perhaps a bit obvious in hindsight. But I tried a Newton's-method-like algorithm and then a continued fraction approach first. Both of those solutions had sub-optimal results for certain cases. For example, the continued fraction algorithm worked great for square roots of any number, but cube roots of any number would result in rational numbers where the numerator and denominator were absolutely gigantic, even for precision values as small as 10 decimal places.

Ultimately, this algorithmically unsophisticated approach occurred to me late one night, took me no time to implement (compared to the other attempts), ticks every box on my 'must haves' list, and has proven to be far superior by every metric, which was a bit... frustrating? I guess? Anyways, I'm just glad I found something that works and returns denominators that aren't the size of a small novel.
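The mediant-bisection idea described above can be sketched in a few lines of Python (illustrative only; the function name, bounds handling, and default precision here are assumptions, not the library's actual code):

```python
from fractions import Fraction

def nth_root(x, n, dps=10):
    """Approximate the Nth root of a non-negative Fraction x to within
    1/10**dps, by narrowing [lo, hi] with the mediant (a+c)/(b+d)
    of lo = a/b and hi = c/d, instead of the midpoint."""
    tolerance = Fraction(1, 10 ** dps)
    lo = Fraction(0)
    hi = max(x, Fraction(1))  # keep the upper bound at >= 1 so roots of x < 1 work
    while hi - lo > tolerance:
        mid = Fraction(lo.numerator + hi.numerator,
                       lo.denominator + hi.denominator)  # the mediant
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid
    return lo

r = nth_root(Fraction(50), 3)
# r.denominator stays modest because mediants visit the simplest
# fractions inside the shrinking interval first.
```

Note that the comparison `mid ** n <= x` only ever needs exact integer arithmetic under the hood, so no precision is lost when deciding which side of the interval to keep.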
Closing this issue now.
Description
`BigRational.NthRoot(new BigRational(50), 3);` returns exactly 3. I would expect that since this function takes a BigRational and not a BigInteger, cutting off the decimal is a mistake? The `FractionalPart` shows a `0` `Numerator`, and casting to any other floating point type still yields an integer.
Error/Exception Message
Silent miscalculation
Expected Behavior
3.684031... or something indicating only whole numbers are supported.
Actual Behavior
3
Steps To Reproduce
Evaluate this: `BigRational.NthRoot(new BigRational(50), 3);`
Additional context/Comments/Notes/Rants
Thanks for creating this library. I hope I'm not missing something obvious :)