mpusz opened 1 year ago
Let me know if this is the right place to start this discussion, but I've been working on the "interoperability with other libraries" task. I'm now attempting to get the boost::math::optimization functions working with units, so I have the following code:
#include <mp-units/systems/si/si.h>
#include <boost/math/optimization/random_search.hpp>
#include <random>   // std::mt19937_64
#include <vector>   // std::vector

using namespace mp_units;
using namespace mp_units::si::unit_symbols;
using boost::math::optimization::random_search;
using boost::math::optimization::random_search_parameters;

auto sphere(std::vector<decltype(1.0 * m)> const& v) {
  auto r = 0.0 * m * m;
  for (auto x : v) {
    r += x * x;
  }
  return r;
}

using ArgType = std::vector<decltype(1.0 * m)>;

int main()
{
  auto rs_params = random_search_parameters<ArgType>();
  rs_params.lower_bounds.resize(4, -1.0 * m);
  rs_params.upper_bounds.resize(4, 1.0 * m);
  rs_params.max_function_calls = 100000;
  rs_params.threads = 2;
  std::mt19937_64 gen(56789);
  auto local_minima = random_search(sphere, rs_params, gen);
}
Now, I have altered the boost code so that it creates a dimensionless type from the container type via:
using DimensionlessFloat = decltype(Real()/Real());
where in this context Real is the mp-units type. However, I am unable to instantiate a uniform_real_distribution on this type, as:
In file included from ~/optimization_plus_units/main.cpp:2:
/math/include/boost/math/optimization/random_search.hpp:108:62: fatal error: no matching conversion for static_cast from 'int' to 'DimensionlessFloat' (aka 'quantity<mp_units::one{}, double>')
uniform_real_distribution<DimensionlessFloat> unif01(static_cast<DimensionlessFloat>(0), static_cast<DimensionlessFloat>(1));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
So, at least for the dimensionless types, I think it's unambiguous that supporting construction from integers/floats must be possible, correct?
@NAThompson, that is not the case for now. There are no implicit conversions from plain numbers to a dimensionless quantity. We had them in 0.8 but removed them for 2.0 to simplify the design. We might restore this functionality at some point in the future, though. We are collecting convincing use cases for this...
Please note that dimensionless quantities still have units, and some of them might be different than one (e.g., percent). This is why, for now, we have to type 42 * one to create a dimensionless quantity.
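For concreteness, here is a minimal sketch of the behaviour described above, using only spellings that already appear in this thread (the commented-out line is the one that is currently rejected):

#include <mp-units/systems/si/si.h>

using namespace mp_units;

int main()
{
  auto q = 42 * one;              // creates a dimensionless quantity with unit one
  // decltype(42 * one) q2 = 42;  // currently does not compile: there is no implicit
  //                              // conversion from a raw number
}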
Ok, cool. I think I'm beginning to see the mismatch between my expectations and the design. To wit, this library seems to have the goal of extracting the maximal possible correctness verification from the technology of dimensioned types. Yet my goal as a library author is maximal generic compatibility with some additional correctness verification; e.g., I want my users to be able to drop a dimensioned type into my templated library code and have things "just work", without my code having to know anything about mp-units. But it seems this is a non-goal of this library?
Safety, performance, and user-readability of error messages will always be a priority of this and most other similar libraries.
We can discuss specific things, though. For example, we could decide to allow implicit or explicit conversions from a raw value to a dimensionless quantity with the unit one.
Agreed. @NAThompson, if you'd like some background reading on why the dimensionless unit "one" is special when it comes to implicit conversions, you can check out the "Other dimensionless units and implicit conversions" section of this doc. You may find the whole page interesting, too. (Note that the doc page is for a different units library, just to prevent confusion, but the page is talking about generic concepts.)
Please note that dimensionless quantities still have units and some of them might be different than one (e.g., percent).
I'm now beginning to think this is wrong, although I concede that a clear definition of "dimensionless" is required to make such a claim. One of the key properties of dimensionless numbers is that you can form power series with them and pass them to transcendental functions. But you can't do that with a percentage (e.g., 5% + (5%)^2 makes no sense, but 0.05 + (0.05)^2 does). So if percentages and degrees are included in the definition of "dimensionless", then one of the greatest conceptual utilities of dimensionless numbers has been removed from it.
According to the official definition, a dimensionless quantity (or, to be more precise, a quantity of dimension one) is one for which all the exponents of the factors corresponding to the base quantities in its quantity dimension are zero.
The properties you expect are valid only for a "unitless" quantity, so a dimensionless quantity with the unit one.
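In code, the distinction reads roughly like this (a sketch; it assumes percent is available from mp-units alongside one, as used later in this thread):

#include <mp-units/systems/si/si.h>

using namespace mp_units;

int main()
{
  auto u = 5 * one;      // dimension one, unit one: the "unitless" case
  auto p = 5 * percent;  // dimension one as well, but the unit is percent, not one
}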
I'm now beginning to think this is wrong, although I concede that a clear definition of "dimensionless" is required to make such a claim. One of the key properties of dimensionless numbers is that you can form power series with them and pass them to transcendental functions. But you can't do that with a percentage (e.g., 5% + (5%)^2 makes no sense, but 0.05 + (0.05)^2 does). So if percentages and degrees are included in the definition of "dimensionless", then one of the greatest conceptual utilities of dimensionless numbers has been removed from it.
I've thought about this a decent amount, especially in the context of the claim that angles are "intrinsically" dimensionless. The most satisfying approach I could come up with was to define "dimensionless" as "not dimensioned", and focus on the definition of "dimensioned".
I think the signature of dimension is the inability to compare a product with its original value. That is, if you have any quantity q, and another quantity d, then d is "dimensioned" if and only if q cannot be meaningfully compared to q * d.
Consider this a "working hypothesis", a contribution to the broader conversation.
If we accept this viewpoint, then percent is dimensionless. 75% of any length is unambiguously smaller than the original length, and 125% is unambiguously larger.
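As a rough illustration of that working hypothesis in code (a sketch only; it reuses the header and unit symbols from the snippet at the top of the thread and assumes percent is provided by mp-units):

#include <mp-units/systems/si/si.h>

using namespace mp_units;
using namespace mp_units::si::unit_symbols;

int main()
{
  auto q = 2.0 * m;
  auto scaled = q * (75 * percent);  // still a length, so...
  bool smaller = scaled < q;         // ...this comparison compiles (and is true)
  // auto bad = q < q * (3.0 * m);   // does not compile: a length cannot be
  //                                 // compared with an area
}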
In the case of your power series, yes you can compute with a percent, simply by following all of the usual rules for computing with units. (For simplicity, we'll look at only the first two terms, but the same procedure could handle higher powers as well.)
First, we need to notice that the product of percent and percent is not percent, but pertenk --- "per ten K", per 10,000. To add different units, we simply follow the usual procedure of converting them to their common unit, which is the largest unit that evenly divides both. Here's the computation, in C++-like code for clarity:
percent(5) + squared(percent(5))
= percent(5) + pertenk(25)
= pertenk(500) + pertenk(25)
= pertenk(525)
This is, of course, equivalent to percent(5.25), or even to the pure number 0.0525.
While the unit conversions may be quite tedious for an infinite series, it should be clear enough that there's no ambiguity in the result.
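For what it's worth, the same computation can be written against mp-units directly; the following sketch assumes percent is available as above and mirrors the worked example term by term:

#include <mp-units/systems/si/si.h>
#include <cassert>

using namespace mp_units;

int main()
{
  auto a = 5 * percent;
  auto b = a * a;    // the unit is percent * percent, i.e. "per ten thousand"
  auto sum = a + b;  // both operands are converted to their common unit first
  assert(sum == 525 * (percent * percent));
  // which is the same value as percent(5.25), i.e. the pure number 0.0525
}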
The properties you expect are valid only for a "unitless" quantity, so a dimensionless quantity with the unit one.
Excellent; with clear definitions we can make progress. So to clarify: Should we be able to instantiate a unitless quantity from ints or floats?
Here's my opinion.
Yes, we should be able to instantiate a unitless quantity from a raw numeric value of the same type.
Yes, this should include implicit conversions. It's perfectly meaningful and very safe to pass a raw number to an API expecting a "quantity of unit one".
All that said, my experience is that this will still have just a little more friction than we'd prefer. There will be a lot of situations where we'd like it to simply "feel like" a number, but the implicit conversion (in either direction) won't trigger. Many users will expect that, say, a frequency in hertz times a duration in seconds will simply be a raw number, and sometimes it will feel that way, but sometimes it won't.
One solution is to make it a raw number. This is the current approach in my library, Au. I think it was also the former approach in mp-units. The problem is that this makes the library harder to reason about, by adding a special case. I've now come to the conclusion that this was a mistake on my part, and mp-units has it right. aurora-opensource/au#185 is tracking this (breaking) change in that library, and there's a little more discussion there.
Anyway... yes, I think even the implicit conversions are still a good idea, but at a minimum I think we should permit explicit conversions.
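To make the hertz-times-seconds example concrete, here is a sketch (the commented-out line is precisely the conversion whose behaviour is being debated):

#include <mp-units/systems/si/si.h>

using namespace mp_units;
using namespace mp_units::si::unit_symbols;

int main()
{
  auto f = 50.0 * Hz;    // a frequency
  auto t = 0.1 * s;      // a duration
  auto n = f * t;        // dimension one, though the unit may be spelled Hz * s rather than one
  // double cycles = n;  // whether this should work implicitly, explicitly, or not at all
  //                     // is the open question discussed above
}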
I had to speed up the refactoring, design, and implementation of the V2 design, and because of that, plenty of unit tests are missing now.
Any help would be appreciated.
Here is the list of features not covered by unit tests:
- DimensionOf and QuantitySpecOf in concept_test.cpp
- Magnitude tests
- quantity_point tests for equality, ordering, common_type, value_cast, quantity_cast, in, force_in, derived quantity_point