mpusz / mp-units

The quantities and units library for C++
https://mpusz.github.io/mp-units/
MIT License

Raise coherent quantity to fractional power doesn't give correct result. #71

Closed. kwikius closed this issue 2 years ago

kwikius commented 4 years ago

Raise coherent quantity to fractional power doesn't give the correct result. Personally I am not so bothered about incoherent quantities, but for coherent quantities this can be done better.

Currently this doesn't give the correct answer:

https://godbolt.org/z/6bgkEK

For coherent quantities the multiplier of the result should remain one and the exponent should be adjusted. In fact, for pow<R>(q) the result exponent is simply the original exponent multiplied by R (so for the square root multiply by 1/2, for the cube root by 1/3, for pow<3/2> by 3/2). If the exponent is a rational number then it can represent the quantity raised to a fractional power correctly and exactly.
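
As a worked example of the rule being described here (a sketch of the proposal, not of what the library currently does), with the conversion factor written as a power of ten:

$$\left(10^{k}\right)^{N/D} = 10^{kN/D}, \qquad \text{e.g.}\quad \sqrt{10^{-1}} = 10^{-1/2}, \qquad \left(10^{-1/2}\right)^{2} = 10^{-1}\ \text{exactly.}$$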

oschonrock commented 4 years ago

yup confirmed.

Totally understand why that's happening. The runtime value and the compile-time ratio are being square rooted separately, both "correctly" except for the obvious isqrt "truncation".

It's not immediately obvious to me how to fix that...

Well, unless we just turn it into a double and call sqrt()

@kwikius the ratio square root does all that halving of the exp etc., but it "makes the exp even first"... and that's why 1/10 => 1/3.
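
A minimal sketch of the truncation being described (the isqrt below is a stand-in, not the library's actual code): rooting the numerator and denominator of a ratio with an integer square root necessarily truncates, which is how 1/10 can end up as 1/3.

#include <cstdint>

// stand-in integer square root (rounds down); a compile-time ratio sqrt has
// to do something of this kind to its numerator and denominator
constexpr std::int64_t isqrt(std::int64_t n) {
  std::int64_t r = 0;
  while ((r + 1) * (r + 1) <= n) ++r;
  return r;
}

// term-by-term rooting turns 1/10 into 1/3; squaring 1/3 gives 1/9, not 1/10,
// hence the inexact round trip
static_assert(isqrt(1) == 1 && isqrt(10) == 3, "1/10 'roots' to 1/3");

int main() {}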

Changing ratio so that the exp is itself another ratio is clearly a possible solution. Although for me, at that stage we are quickly approaching the point where all conversion ratios should be done with doubles.

Particle physics was not meant to be solved with integers.

kwikius commented 4 years ago

Regarding using floating point types in the ratio, there is the great problem of them being inexact. Since ultimately the ratio is part of the type system, life gets much harder if types that are meant to be the same don't quite match. We can accept lack of precision at runtime, when dealing with values, but it gets pretty ugly when it's in the type system (this has always been the argument against floating point template parameters, I think). The beauty of rationals is that they are exact (until they overflow, but that results in a failed compilation and no runnable program, not an incorrect runnable program):

#include <cmath>
#include <cassert>

int main()
{
  double v1 = 10;
  double v2 = std::pow(v1,1./3);
  double v3 = std::pow(v2,3);

  //### most likely will assert ###
  assert(v3 == v1);
  //###################
}

Because the exponent is an integer in the current mpusz implementation, you have to modify the multiplier (the Num, Den template parameters) to try to get the square root, but that causes problems:

1. If the multiplier is not equal to 1 then the result quantity is not a coherent quantity. That is really not acceptable, since coherent quantities are simply superior; else why would the SI system be dominant!
2. Taking the square root of a rational can either recurse indefinitely or be inexact: https://github.com/mpusz/units/blob/master/src/include/units/ratio.h#L208 (probably should be static_asserting(root % 2 == 0) there!).
3. Ideally we want to use other rational powers, e.g. q^(1/3) (e.g. for volume --> length) etc., and things will rapidly get worse...

Therefore I would suggest making the exponent a rational and using that to take the root of the conversion factor. This works perfectly for coherent quantities (N == 1, D == 1):

// requires C++14 or later (auto return type deduction)
#include <ratio>
#include <cmath>
#include <type_traits>
#include <cassert>
#include <iostream>

/*
Simple type using compiletime exponent and showing fun raising to rational power
 runtime value to some power of 10
*/
template <typename ExpRatio>
struct value_exp{
   explicit value_exp(double const & v)
   : value{v}{}

   double value;
   //get the underlying numeric value
   double eval() const { return value * std::pow(10,static_cast<double>(ExpRatio::num)/ExpRatio::den);}
};

// power and roots function (square, sqrt, etc.)
template <int N, int D, typename ExpRatio>
auto  to_power(value_exp<ExpRatio> const & v)
{
    typedef value_exp<
      typename std::ratio_multiply< 
         std::ratio<N,D>,ExpRatio
      >::type
     > result_type;
    return result_type{std::pow(v.value, static_cast<double>(N)/D)};
}

template <typename ExpRatio>
std::ostream & operator << (std::ostream & os, value_exp<ExpRatio> const & v)
{
   return os << v.value << "^(" << ExpRatio::num << "/" << ExpRatio::den << ")" << "{==" << v.eval() << "}";
}

int main()
{
   value_exp<std::ratio<-1> > q1{100};  // eg decimetre

   std::cout << "q1 = " << q1 << '\n'; 

   auto q2 = to_power<1,3>(q1);    // eg decimetre^(1/3)

   static_assert(std::is_same<decltype(q2),value_exp<std::ratio<-1,3> > >::value,"");

   std::cout << "q2 = " <<  q2 <<'\n';

   auto q3 = to_power<3,1>(q2);

   static_assert(std::is_same<decltype(q3),decltype(q1) >::value,"");

   std::cout << "q3 = "<< q3 <<'\n';

}
/*
output:
q1 = 100^(-1/1){==10}
q2 = 4.64159^(-1/3){==2.15443}
q3 = 100^(-1/1){==10}

*/

Of course that doesn't solve the problem for values whose multiplier is not 1. And at this point I refer back to https://github.com/mpusz/units/pull/55#issue-362158248

In my own quan library I solve the issue of coherent quantities by the rational exponent, and of incoherent quantities by (except in certain cases) converting them to coherent quantities before doing any useful calcs.

It is good to get some more examples like https://github.com/mpusz/units/blob/master/example/total_energy.cpp, since with more and more examples it should become clear that once you move away from toy examples, SI units rightly dominate: https://github.com/mpusz/units/blob/master/example/total_energy.cpp#L23 (only SI units there).

Therefore the problem of converting incoherent quantities is of very minor importance for serious work, though historically in newsgroups and forums among the twitterati it takes a huge amount of oxygen from other parts of the library. So what if it is slow or inexact? It is only ever going to be used for HMI anyway, with no need for high speed and usually formatted to a fixed number of decimals anyway...

mpusz commented 3 years ago

This issue still persists in the latest version. Updated Compiler Explorer link: https://godbolt.org/z/fx56deq7x.

It would be great if someone with a strong mathematical background could look into it and help resolve it, as it has been with us for way too long now 😉

kwikius commented 3 years ago

Hi @mpusz

The problem is the use of an integer exponent in the mp-units ratio type: https://github.com/mpusz/units/blob/master/src/core/include/units/ratio.h#L53

Replace it with a compile-time rational exponent and then it will work as explained above.

I added examples to my libraries showing this in action.

in pqs:

https://github.com/kwikius/pqs/blob/master/examples/raise_quantity_to_fractional_power.cpp

q1 = 100 dm
q2 (square root of q1) = 10 m½⋅1⋅10⁻½
q3 (q2^2, should be the same as q1) = 100 dm

https://github.com/kwikius/pqs/blob/master/examples/raise_quantity_to_fractional_power_imp.cpp

q1 = 100 ƒƫ
q2 (square root of q1) = 10 ƒƫ½
q3 (q2^2, should be the same as q1) = 100 ƒƫ

in quan: https://github.com/kwikius/quan-trunk/blob/master/quan_matters/examples/raise_quantity_to_fractional_power.cpp

q1 = 100 dm
q2 (square root of q1) = 10 [m+1/2 * 1e(-1/2)]
q3 (q2^2, should be the same as q1) = 100 dm

mpusz commented 3 years ago

Thanks, @kwikius! Right now I am fighting with #281 and then I plan to address other things for mp-units 0.8.0 #239, #134, #211, and possibly #99. This will take quite a lot of time. I hope that in the meantime someone will pick this up. If not, I will look into it when I am done with other major refactorings.

chiphogg commented 3 years ago

I am glad I stumbled on this issue! I was just about to post my own, based on the following surprising example:

constexpr auto x = (1.0 * m) * (1.0 * km);
const decltype(x) x_roundtrip = sqrt(x) * sqrt(x);
std::cout << "Expect 1; got " << x_roundtrip.number();

This produces 0.9 for a 10% error.

Big picture

I recommend disabling roots of imperfect powers immediately, via static_assert. There is no point in trying to "fix" the current implementation, because it uses ratios for unit magnitudes, and we will see that ratios do not fulfill the most basic and natural requirements for a magnitude representation.

What are the requirements for a magnitude representation?

To start, a basic fact of quantity calculus is that units are closed under products and rational powers. Conceptually, a unit has two constituent parts: a dimension and a magnitude. To compute products and rational powers of units, we apply these operations separately to their dimension and magnitude parts. Thus, our magnitude representation should satisfy these requirements:

  1. Support exact arithmetic for all products and rational powers.
  2. Provide a unique representation for each magnitude (i.e., each positive real number).

I think the existing arguments against floating points suffice. As for ratios, they are rather infamously not closed under rational powers. Thus, neither of these approaches can be a viable solution for representing magnitudes.
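
To spell out why ratios cannot meet requirement 1, the classic argument for the simplest case (the square root of 2) already suffices:

$$\sqrt{2} = \frac{p}{q},\ \gcd(p, q) = 1 \;\Rightarrow\; p^{2} = 2q^{2} \;\Rightarrow\; 2 \mid p \;\Rightarrow\; 4 \mid 2q^{2} \;\Rightarrow\; 2 \mid q,$$

which contradicts gcd(p, q) = 1. So a unit whose magnitude is sqrt(2) has no exact ratio representation at all, let alone a unique one.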

Solution: vector space representation

We can solve this problem robustly by introducing a vector space representation for magnitudes. This is the same strategy we already use for dimensions, so it should share the same strengths and weaknesses: they should generally work very well together!

The question then becomes what basis vectors to use. As I explained in #195, the prime numbers form an excellent start for our basis, because they are linearly independent. Immediately, we find we can still represent all ratios, but they are automatically in lowest terms. We also get all the radicals---if a unit has magnitude sqrt(2), we can represent it exactly, and correctly reason that we'll get 2 when we square it!

What's more, this also gives robust support for non-SI units. @kwikius identifies this problem above, but proposes to "solve" it by declaring non-SI units unimportant. The rational exponent approach is a step in the right direction, but it only applies to rational powers of 10. By contrast, the vector space representation handles every case which the above value_exp approach does, but also handles arbitrary other magnitudes with ease.
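
Here is a minimal, self-contained sketch of the idea (illustrative only; the names base_power, magnitude and mag_pow are not the mp-units, Aurora, or benri API). A magnitude is stored as a product of prime basis factors, each with a std::ratio exponent, so a rational power just scales every exponent and round-trips exactly:

#include <cstdio>
#include <ratio>
#include <type_traits>

// one basis factor: Prime raised to the rational power Exp::num / Exp::den
template<int Prime, typename Exp>
struct base_power {};

// a magnitude is a product of base powers over distinct primes
template<typename... BasePowers>
struct magnitude {};

// raising a magnitude to the rational power N/D multiplies every exponent by N/D
template<int N, int D, typename M>
struct mag_pow;

template<int N, int D, int... Primes, typename... Exps>
struct mag_pow<N, D, magnitude<base_power<Primes, Exps>...>> {
  using type = magnitude<
      base_power<Primes, std::ratio_multiply<Exps, std::ratio<N, D>>>...>;
};

// the km -> m conversion factor: 1000 = 2^3 * 5^3
using thousand = magnitude<base_power<2, std::ratio<3>>,
                           base_power<5, std::ratio<3>>>;

using root      = mag_pow<1, 2, thousand>::type;  // 2^(3/2) * 5^(3/2), i.e. sqrt(1000)
using roundtrip = mag_pow<2, 1, root>::type;      // squares back to 2^3 * 5^3

static_assert(std::is_same<roundtrip, thousand>::value, "sqrt round-trips exactly");

int main() { std::puts("sqrt(1000) squared is exactly 1000 again"); }

A real implementation would additionally merge matching primes when multiplying two magnitudes (adding exponents) and factorise integer magnitudes into this form, but the exactness of the rational-power round trip above holds by construction.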

Benefits of vector space representation

Implementation and experience

We have been using this representation in the Aurora units library from the beginning. It hasn't caused any problems yet; it basically just works and gets out of the way.

One downside is that the basis factor representation makes type names longer (for example, in compiler errors). But mp-units has already solved this problem! The elegant downcasting mechanism should prevent the magnitude type lists from bothering end users.

Until recently, I had thought we were the first to invent this representation. But I did some digging, and the benri library came up with it before we did. Congratulations @jansende on this beautiful implementation! Unless I'm much mistaken, it makes your library the first units library whose magnitude representation fulfills the basic natural requirements.

kwikius commented 3 years ago

I think what you are saying is great. Thanks for solving this issue

mpusz commented 3 years ago

@chiphogg Thanks for raising this! For the upcoming month I will not have time to do anything in mp-units but after that, hopefully, I will finally be able to work on those features. If you would like to contribute in some way please do so as you already have some implementation experience in this subject. Also, we can meet online and discuss this if you like? We would really appreciate some help here.

chiphogg commented 3 years ago

Works for me: I'm also swamped this month, and probably for much the same reason (CppCon). 🙂 We can figure out a rough plan at our leisure before then, with an eye towards starting work next month.

mpusz commented 3 years ago

Cool! 😃

kwikius commented 3 years ago

To start, a basic fact of quantity calculus is that units are closed under products and rational powers. Conceptually, a unit has two constituent parts: a dimension and a magnitude.

Hmm. This is a question of definitions. In my understanding of magnitude, the magnitude of a physical quantity only makes sense in the context of the unit it is expressed in. Hence for 1 m: magnitude = 1, unit = m, so the magnitude is not part of the unit. A unit exists within a measurement system and has a dimension and a conversion factor which relates it to the base units of the system. A base unit is simply a unit whose dimension is a base_dimension and whose conversion factor is 1, based on some physical phenomenon defined as part of the measurement system. https://www.nist.gov/pml/weights-and-measures/metric-si/si-units https://en.wikipedia.org/wiki/Conversion_of_units

Nevertheless, quantities, units and their constituent parts should be expressed in terms of concepts, which would allow dimensions and conversion factors to be implemented in many different ways within your own unit system.

Now, in my unit system I can define the semantics so that incoherent units are converted before use, and you can do the opposite if you wish. You could use primes, though that seems obscure, or you could use symbolic computation of types, carrying the operations in the type as mpusz/units I think does in the "unknown unit". You can in theory, but, as you point out above, compile times soon become unacceptable and error messages unreadable. Last time I checked, the compile times for mpusz/units were around 5 times those of my pqs library. Everything has a cost.

https://github.com/kwikius/pqs/wiki https://github.com/kwikius/pqs/wiki/concept-unit

It would be interesting to see the source of the Aurora library, but I assume that, like the "Bosch" units library, it is not open source? If so it is not really easy to discuss.

kwikius commented 3 years ago

What's more, this also gives robust support for non-SI units. @kwikius identifies this problem above, but proposes to "solve" it by declaring non-SI units unimportant. The rational exponent approach is a step in the right direction, but it only applies to rational powers of 10. By contrast, the vector space representation handles every case which the above value_exp approach does, but also handles arbitrary other magnitudes with ease.

Again, the implication here is wrong.

I don't believe that non-SI units are unimportant, but the whole point of the SI system and its coherent units is that by using coherent units you make your life a whole lot easier. Non-SI units are essentially regressive for most everyday use. Therefore I see them as legacy units, and so my preferred semantic converts them to coherent units before any calculation.

But here is the thing... In pqs I have implemented that semantic for SI units and another semantic for imperial units, so if you really need another semantic, my library allows you to express that. I also hope one day to show a logarithmic representation for natural units, which allows all units to be coherent. Again, my library allows you to express that other semantic.

chiphogg commented 3 years ago

First of all: if I've misrepresented your position, I'm truly sorry, and I'm grateful to you for pointing it out. Thanks for clarifying!

You've raised some great points, and I'd like to reply to them one at a time.

Hmm. This is a question of definitions. In my understanding of magnitude, the magnitude of a physical quantity only makes sense in the context of the unit it is expressed in. Hence for 1 m: magnitude = 1, unit = m, so the magnitude is not part of the unit. A unit exists within a measurement system and has a dimension and a conversion factor which relates it to the base units of the system. A base unit is simply a unit whose dimension is a base_dimension and whose conversion factor is 1, based on some physical phenomenon defined as part of the measurement system. https://www.nist.gov/pml/weights-and-measures/metric-si/si-units https://en.wikipedia.org/wiki/Conversion_of_units

Yes, I rather glossed over this in my earlier post; let me elaborate here.

To begin: a clarification. The Magnitude I'm referring to is not (directly) a property of a Quantity. It's a property of a Unit. So when you say that for 1 m the Magnitude is 1, I believe what you're referring to is the Value.

In my understanding:

Now, this "Magnitude" is not uniquely defined for a Unit; we could multiply, say, all length Units by any positive real number without any observable effect. Rather, what is physically meaningful is the ratio of the Magnitudes of two Units (of the same Dimension). I believe this ratio-of-Magnitudes corresponds to the "conversion factor" mentioned in the links you gave above---except that it's defined for any two Units of the same Dimension. One can single out certain units and call them "base units", but I don't believe this is strictly necessary; I can't think of any observable effect.

Of course, when we implement Units in software, we'll need to give each Unit a concrete Magnitude, but that's a pure implementation detail, and not part of its interface. I think a good way to see the distinction is that we should never expect to see a unit test mention "the Magnitude" of a Unit; rather, unit tests should only assert against their ratios.
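
A toy illustration of that last point (hypothetical types, not any library's API): scaling every internal magnitude by the same factor changes nothing observable, which is why tests should only ever pin down ratios.

#include <cassert>

// illustrative only: each unit carries some internal magnitude, but only the
// ratio of two magnitudes of the same dimension is observable
struct unit { double magnitude; };

constexpr double ratio_of(unit a, unit b) { return a.magnitude / b.magnitude; }

int main() {
  constexpr unit metre{1.0}, kilometre{1000.0};
  // the same units with every internal magnitude scaled by 7...
  constexpr unit metre_alt{7.0}, kilometre_alt{7000.0};
  // ...give exactly the same observable ratio:
  static_assert(ratio_of(kilometre, metre) == ratio_of(kilometre_alt, metre_alt),
                "only ratios are observable");
  assert(ratio_of(kilometre, metre) == 1000.0);
}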

Now, in my unit system I can define the semantics so that incoherent units are converted before use, and you can do the opposite if you wish. You could use primes, though that seems obscure, or you could use symbolic computation of types, carrying the operations in the type as mpusz/units I think does in the "unknown unit". You can in theory, but, as you point out above, compile times soon become unacceptable and error messages unreadable. Last time I checked, the compile times for mpusz/units were around 5 times those of my pqs library. Everything has a cost.

I'm not sure why prime numbers are "obscure". If you use a vector space representation for Magnitudes (and I know of no alternative which satisfies the basic requirements of quantity calculus!), then what alternative set of basis vectors would you suggest?

As to compile times: I didn't actually point out that they soon become unacceptable. I didn't say anything about them here, but elsewhere I mentioned that they were "not subjectively noticeable" even when we concocted "adversarial" Magnitudes with far more basis vectors than could reasonably arise naturally. So, we have already performed the measurement, and we know that the compile time cost of this approach is small.

It's true that the type names become more complicated in error messages, but downcasting should prevent the vast majority of "raw Magnitude" types from showing up in error messages, so I don't think this is a big deal.

It would be interesting to see the source of the Aurora library, but I assume that, like the "Bosch" units library, it is not open source? If so it is not really easy to discuss.

Unfortunately, you're correct: our library is not currently open source, and we have no plans at present to change that. However, I recently discovered that the open-source benri library has already implemented a vector space representation for their Magnitudes, which they call Prefixes (impl/prefix.h, si/prefixes.h). Our implementations differ in several respects (for example, we don't compute prime numbers to get the prime factorization, as this is unnecessary), but it looks like they got all the essential ideas a couple years ago---really impressive stuff!

If you want to play with a working implementation of the vector space representation, you can check out benri.

I don't believe that non-SI units are unimportant, but the whole point of the SI system and its coherent units is that by using coherent units you make your life a whole lot easier. Non-SI units are essentially regressive for most everyday use. Therefore I see them as legacy units, and so my preferred semantic converts them to coherent units before any calculation.

That's a fine choice---I took the same approach with my first units library (for Uber ATG). It can certainly simplify some implementation details.

But if we can support arbitrary units natively, without regressing support for SI units---well, why wouldn't we?

I think a good goal for a units library is to end up with the same binary executable we would have had without it, but with the effort of unit-checking borne by the compiler. Part of this means that when we put a value into a units library type, we should be able to get the same value out. If we eagerly convert to SI---again, a fine approach; I've done it!---we'll get a very similar value out, but it'll be subtly different. Maybe it won't cause any problems; maybe it will.

But if we can treat all units on an equal footing [2], and trust the library to handle the unit-checking---well, why wouldn't we?


[1] Yes, mp-units defines Dimensions in terms of Units, and gives good reasons for doing so. In this comment, I'm restricting the discussion to the case where we have an agreed-upon set of base dimensions. This doesn't really affect the main issue at hand, which is the necessity for a vector space representation of what I'm calling the Magnitude of a Unit.

[2] Alternatively, "an equal (12 * 2.54 cm)ing".

kwikius commented 3 years ago

Now, this "Magnitude" is not uniquely defined for a Unit; we could multiply, say, all length Units by any positive real number without any observable effect. Rather, what is physically meaningful is the ratio of the Magnitudes of two Units (of the same Dimension). I believe this ratio-of-Magnitudes corresponds to the "conversion factor" mentioned in the links you gave above---except that it's defined for any two Units of the same Dimension. One can single out certain units and call them "base units", but I don't believe this is strictly necessary; I can't think of any observable effect.

The observable effect is the measurement (which is defined and documented by the measurement system).

"except that it's defined for any two Units of the same Dimension"

A story

Bill asks Jill for a 1 hand * 1 hand square of plywood.

Jill cuts the ply to size and gives it to Bill.

Bill takes it home but is soon back: "I want my money back. This ply is too big."

Jill takes the ply and measures it using her hand.

"No refund, sorry. One of my hands exactly," she says. "Wow," says Bill, "she sure has big hands!"


You need to be in the framework of a single agreed measurement system, otherwise there is no way to know the "magnitude" of a unit or the ratio of the "magnitudes" of two units. The measurement system's primary role is to define as precisely as possible how to make that measurement in terms of some physical phenomena.

(Both mpusz/units and benri appear to base their systems on the SI, but that is no more "absolute" than any other measurement system.)

Therefore there is no absolute ratio between 2 units. A unit only exists within a particular measurement system.

https://github.com/kwikius/pqs/wiki/concept-measurement_system

Does this mean that you can't have lengths in feet in the SI system? No, but you need to specify them in terms of a conversion factor to meters.

https://github.com/kwikius/pqs/blob/master/src/include/pqs/systems/si/length.hpp#L22

Whereas in my imperial measurement system, I specified feet as a base unit

https://github.com/kwikius/pqs/blob/master/src/include/pqs/systems/imperial/units/length_unit.hpp

In pqs, can you add an imperial-measurement-system foot to an SI-measurement-system foot? No, not allowed. Operations only work on units within the same measurement system. (Of course that doesn't preclude conversions between measurement systems.)

Why shouldn't you be able to multiply feet by meters by inches and get a foot_meter_inch?

Because that is what we had before the SI, and it resulted in continually converting from foot_erg_seconds to dyne_speedo_wetwipes or whatever. Hence the much-cited Mars lander experience.

chiphogg commented 3 years ago

Thanks for the amusing story. :slightly_smiling_face: However, I don't understand what it has to do with the claim that "there is no absolute ratio between 2 units" (the story is about an ill-defined single unit). What's more, having read and pondered your post, I still don't think the claim is correct.

Take any two length units, in two completely arbitrary systems of measurement. We can make actual, physical rulers, marked off according to each set of units. We can sensibly compare these units: line up the rulers next to each other, and see which ruler's mark we reach first. We can sensibly add these units: line up the start of one ruler with the end of the other, and measure from the first ruler's start to the second ruler's end. Nothing here depends on anointing either unit as a "base unit", or on bringing in some third unit. That is what I mean when I say there is an absolute ratio between any two units: they each correspond to a physical quantity of the same dimension, and this is enough to define their relationship in principle.

Why shouldn't you be able to multiply feet by meters by inches and get a foot_meter_inch?

Because that is what we had before the SI, and it resulted in continually converting from foot_erg_seconds to dyne_speedo_wetwipes or whatever. Hence the much-cited Mars lander experience.

The lesson from the Mars Climate Orbiter is not "everyone should use a single system of units". The lesson is "we must robustly track the units of every quantity". A good units library with no privileged units would have prevented the disaster just as surely as waving a wand and having everyone use SI units---or, for that matter, having everyone use Imperial units!

The answer to "Why shouldn't you be able to multiply feet by meters by inches and get a foot_meter_inch?" is: well, you should. This is a perfectly well-defined unit of volume. And any units library which permits its users to create such a unit is better (in this respect) than any which does not.

kwikius commented 3 years ago

My statement "Therefore there is no absolute ratio between 2 units" is poorly written.

There is no absolute ratio between 2 units in different measurement systems. You can only have a ratio between units in the same measurement system. So the inch cannot be added to the meter except by defining both it and the meter in a single measurement system.

In the SI measurement system example in pqs I define a type of unit called a unit conversion, which defines the conversion factor to convert the inch to the meter in the SI system; the meter is of course a base_unit of the SI system, defined in that system by measuring a physical phenomenon.

Take any two length units, in two completely arbitrary systems of measurement. We can make actual, physical rulers, marked off according to each set of units. We can sensibly compare these units: line up the rulers next to each other, and see which ruler's mark we reach first.

Sure, but your 2 rulers placed together (or better, one single ruler) essentially define a single measurement system. The essential part of the measurement system is that it documents and tries to define its base quantities in terms of some physical phenomenon. Your stick is a simple example, and historically there have been reference measurement sticks; thus the Bill and Jill hand example. The SI system is the most advanced attempt to use a single stick.

We can sensibly add these units

Hmm. That depends on your definition of what unit means. Unfortunately these terms have very different meanings to different people. I would say you can add 2 (concrete) quantities, but adding 2 units has no meaning; it is best to define these terms more precisely, as I have tried to do: https://github.com/kwikius/pqs/wiki/concept-unit https://github.com/kwikius/pqs/wiki/concept-quantity

The answer to "Why shouldn't you be able to multiply feet by meters by inches and get a foot_meter_inch?" is: well, you should. This is a perfectly well-defined unit of volume. And any units library which permits its users to create such a unit is better (in this respect) than any which does not.

Of course you should be allowed to do as you wish, but I should also be allowed to disallow that approach and enforce a particular set of units. I created the imperial example system in pqs to demonstrate the "libertarian" approach: https://github.com/kwikius/pqs/tree/master/src/include/pqs/systems/imperial. The library uses an olde-world font because it is basically a joke ;)

mrberman87 commented 3 years ago

@kwikius,

I think the confusion of "depends on your definition of what unit means" is something that groups like NIST have been working on. For this example of units of length, take a look at https://www.nist.gov/si-redefinition/meter and read the history of this struggle. How we represent this in software and convert units between systems is our own struggle.

As for conversions between systems, everything can have a conversion factor for similar type base units, like units of length. For this example, I always use the factor of 2.54 cm per inch. This conversion ratio is the key to convert and compare units in metric and imperial systems. From here, any unit of length in those two systems can be compared.
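
A quick worked example of chaining that single defined factor (plain doubles here, just to show the arithmetic; 1 in = 2.54 cm is exact by definition):

#include <cstdio>

int main() {
  constexpr double cm_per_inch = 2.54;                          // exact by definition
  constexpr double cm_per_foot = 12.0 * cm_per_inch;            // 30.48 cm
  constexpr double m_per_mile  = 5280.0 * cm_per_foot / 100.0;  // 1609.344 m
  std::printf("1 ft = %.2f cm, 1 mile = %.3f m\n", cm_per_foot, m_per_mile);
}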

kwikius commented 3 years ago

@mrberman87 Your conversion factor is in fact an SI conversion factor (https://www.nist.gov/pml/special-publication-811/nist-guide-si-appendix-b-conversion-factors/nist-guide-si-appendix-b8), so both your units are indeed defined in a single measurement system.

Even in the SI, ambiguities can arise if you stray outside the units recommended for use with the SI; for example, don't assume that 1 foot is always equal to 12 inches: https://www.nist.gov/pml/us-surveyfoot

https://everydayastronaut.com/mars-climate-orbiter/

kwikius commented 3 years ago

@mrberman87 The statement "add 2 units" has no meaning to me, since in my definition the unit is merely an annotation telling us what the numerical value applied to a physical quantity means.

Let quantity q_a be a numerical value v_a in unit u_a: [v_a, u_a].
Let quantity q_b be a numerical value v_b in unit u_b: [v_b, u_b].

u_a and u_b are of course defined in the same measurement system!

To "add 2 units" literally suggests that we say that the result q_r might be = [v_a + v_b, u_a + u_b] which is clearly wrong!, so we cannot "add 2 units" in the literal sense.

"Ah but that is not what I meant!"

No; so whatever is meant by "add 2 units" is something more complicated and needs to be defined. It might be obvious to you, but not to me.

In pqs I do indeed have a function that could be called "add 2 units", but its purpose is semantic: to return the unit of the result of the addition of q_a and q_b. In an addition calculation both q_a and q_b are converted to the result unit, and hence the converted numerical values can be added together. Incidentally, before the inevitable protests, I emphasise that you can provide your own preferred semantic for your own measurement system by customising the structure.

https://github.com/kwikius/pqs/blob/master/src/include/pqs/concepts/quantity/plus.hpp#L30
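
A rough sketch of that addition semantic (illustrative only, not the actual pqs implementation linked above): each quantity carries a compile-time conversion factor to its system's base unit, a result unit is chosen, and both numerical values are converted to it before being added.

#include <cassert>

// illustrative quantity: numerical value plus a compile-time conversion
// factor Num/Den to the base unit of its measurement system
template<int Num, int Den>
struct length {
  double value;
};

// "adding the units" only chooses the result unit; here we pick the first
// operand's unit, convert the second operand's value to it, and add values
template<int N1, int D1, int N2, int D2>
auto operator+(length<N1, D1> a, length<N2, D2> b) {
  double factor = (static_cast<double>(N2) * D1) / (static_cast<double>(D2) * N1);
  return length<N1, D1>{a.value + b.value * factor};
}

int main() {
  length<1, 1000> mm{500.0};  // millimetres: factor 1/1000 to metres
  length<1, 100>  cm{25.0};   // centimetres: factor 1/100 to metres
  auto sum = mm + cm;         // 500 mm + 250 mm
  assert(sum.value == 750.0);
}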

kwikius commented 3 years ago

@mpusz, I would suggest closing this, and for @chiphogg to open another issue or discussion with his proposal. I think I understand the basics of what his proposal is about and I think it would be interesting, but it is not really related to this issue.

chiphogg commented 3 years ago

I like the idea of a separate issue; I filed #300 for this.

I don't think it makes sense to close this present issue while the bug is still present. If we wanted to close it in the near term, I think we'd need to add the static_assert at a minimum. Alternatively, we can just keep it open for a month or so while we wait for it to be easy to fix robustly.

chiphogg commented 2 years ago

I wrote this test case just now, to check and see if the above bug is fixed:

constexpr auto x = (1.0_q_m) * (1.0_q_km);
const decltype(x) x_roundtrip = sqrt(x) * sqrt(x);
REQUIRE(x_roundtrip.number() == 1.0);

It passes!

I vaguely recall deleting the sketchy code that tries to compute a "reasonable rational approximation" when we take square roots, so this result was what I expected.

Perhaps this issue is now fixed? Are there any other tests people want to perform before closing it?

kwikius commented 2 years ago

Glad that this is fixed. Happy for it to be closed again. Ideally you could try raising quantities to some fractional powers with a few more digits, to see if you have reasonable headroom for intermediate results before the system breaks, e.g. https://github.com/kwikius/quan-trunk/blob/master/quan_matters/examples/high_power_quantities.cpp#L43

chiphogg commented 2 years ago

Looks like this test passes:

const auto x = 1.0_q_m;
const auto x_to_the_201 = pow<201>(x);
const decltype(x) x_roundtrip = pow<1, 201>(x_to_the_201);
CHECK(x == x_roundtrip);

chiphogg commented 2 years ago

@mpusz, I think you can close this issue, unless there's anything else you can think of to test. (And it's one of the pinned issues for the repo, too, so that's nice!)

mpusz commented 2 years ago

Thanks!!!