Can you also post a snippet of code (or test case) that generates a difficult-to-read error, and the error itself (and compiler version)?
I think it will be easier to simplify the errors (and probably even get better results, because no base units are returned) by just making the base units have an actual type instead of a typedef, but even that's easier said than done (e.g. possibly a full library refactor).
I guarantee you making it more boost::units-like or trying to force a unit system will make the library much less powerful and will worsen the error messages.
As to your suggestion... I don't know. I like the syntax, and I agree that it would be much more readable. But take a look under the hood at something simple like base_unit_multiply:
template<class, class> struct base_unit_multiply_impl;
template<class... Exponents1, class... Exponents2>
struct base_unit_multiply_impl<base_unit<Exponents1...>, base_unit<Exponents2...>> {
using type = base_unit<std::ratio_add<Exponents1, Exponents2>...>;
};
template<class U1, class U2>
using base_unit_multiply = typename base_unit_multiply_impl<U1, U2>::type;
Notice that it's basically a simple variadic wrapper around std::ratio_add. Everything would have to be a lot more complicated from the ground up... I'm guessing a complete rewrite.
Actually, your syntax looks a lot like a previous incarnation of units, which never really got off the ground due to this fatal flaw: how do you determine type equivalence? C++ will see the following two types as being incompatible even though they aren't (logically):
// velocity #1
units::base_unit<
units::dimension_whole<dimensions::length, 1>,
units::dimension_whole<dimensions::time, -1>>
// velocity #2
units::base_unit<
units::dimension_whole<dimensions::time, -1>,
units::dimension_whole<dimensions::length, 1>>
And that's for two parameters. You can't canonicalize the order because then you'd just end up with my library. The whole thing with the implied tag-dispatching seems off the top of my head like it would suffer badly from combinatorial explosion. Reference this SO question where I asked about basically the same thing. After a lot of thought and initial refusal to believe that I had to create canonical units, I eventually came around and was able to implement much more in the current library iteration than in the previous due to its design limitations. But were the error messages better before? Yes.
I'd definitely encourage you to try though if you have the inclination. You might have more luck than I did, and if you come up with a proof of concept that shows it can work, I'd be a lot more open to the refactor. As it is I think it would be going down a road I've already been on and abandoned.
Slightly off-topic, but possibly helpful: there's a new feature that might help you solve your dimensional-analysis debugging headaches, which is that unknown unit types will now print their dimensions to cout. E.g.:
meter_t a = // something that isn't a meter
// refactor to find the error
auto a = // something that isn't a meter
std::cout << a; // shows "1 m s^-1", oops, you have a velocity!
Sometimes I also use the trick
std::cout << decltype(a) << std::endl;
This also produces a compile error, but it tends to be much more readable.
Regarding the redundant types issue:
// velocity #1
units::base_unit<
units::dimension_whole<dimensions::length, 1>,
units::dimension_whole<dimensions::time, -1>>
// velocity #2
units::base_unit<
units::dimension_whole<dimensions::time, -1>,
units::dimension_whole<dimensions::length, 1>>
I nearly addressed this in my original remark, but thought better of it (mainly because of my very limited experience with variadics)... But it seems clear to me that there must be a canonical, well-defined order for all base dimensions. This could be achieved by associating each dimension with a unique integer constant, e.g. 100 for meters, 200 for kilograms, 300 for seconds, et cetera. It's not an elegant solution, exactly, but something like that is a necessity. Empty space would leave room for user-defined base dimensions. Using something like a FourCC might also have its benefits for debugging and printing purposes.
It would have to be an error to define a unit which lists its dimensions out-of-order, and any compounding operations would need to preserve ordering when assembling a compound unit.
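A minimal sketch of what such ID'd base dimensions might look like (the names and ID values here are purely illustrative, not a proposed final interface):

namespace dims
{
	struct length { static const int DIM_ID = 100; };  // meters
	struct mass   { static const int DIM_ID = 200; };  // kilograms
	struct time   { static const int DIM_ID = 300; };  // seconds
}
// Any compound dimension would then be required to list its bases in ascending DIM_ID
// order, and compounding operations would merge two such sorted lists while preserving
// that order.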
I've also been thinking it over, and I think that there's a lot more value in a sparse associative list (or even a non-sparse associative list) than in reducing rationals to integers. The key problem is that it's so easy to lose track of which dimension is which in a long printout.
Something similar was proposed in the same SO question I linked to earlier; you may want to check it out. I think you'd have to prove that your IDs were unique in all cases, including for derived units, which is going to be difficult, but maybe possible with prime numbers. Then there are pi/datum shifts to consider, which were most of the hard part of this library.
But again, if you want me to do something about it in this library, just post the case and I'll see what type of improvement I can offer.
I don't think derived units would need unique IDs, only base dimensions. These would serve the primary purpose of defining a canonical ordering of dimensions in a variadic list and (possibly) facilitating a simple equality comparison.
I'll look into reproducing that error. I may also try my hand at some variadic wizardry to see if this sort of thing can be implemented.
Look into boost::hana if you haven't already. It has some associative CT data structure capability from what I understand, and is probably the best place to start something like what you're proposing.
OK, I have a working proof of concept: https://github.com/EvanBalster/units_variadic_test
This includes:
More remarks later!
— Sorry, I was in a hurry writing that. Such a hurry, in fact, that I forgot to push my working multiply implementation. It'll be online tomorrow morning.
To summarize: the repo linked earlier is a working (under MSVC) proof of concept for type checking and combination mechanisms (but not math!) analogous to units' functionality, but utilizing sparse templates. It's not extensively tested, but it can manipulate dimension lists while maintaining a canonical order. There is exactly one valid parameter list for any given dimensionality. Best of all, it's easy to introduce user dimensions as long as the UID is unique — no need to create "systems".
The dimensions<> template performs checks (demonstrated in the current commit) to ensure that dimensions are listed in ascending order of their integer UIDs and have non-zero exponents. Base dimensions are just structures exposing a UID. The template's parameters are a variadic list of dimension<T, ratio<N,D>>, sorted according to T::DIM_ID. I'm not sure whether the dimension<> encapsulation is necessary or beneficial (see bottom).
Inverse (implemented) and power (not implemented) are trivial enough. The type multiply operation (not pushed yet) is implemented by working through the ordered dimension list one item at a time, and summing or cancelling the exponents of any matching dimensions. This is achieved through six specializations which identify and process the lowest-UID dimension, composing the output list recursively. The code for this is actually pretty simple — I managed to whack it together in a few hours without prior experience in variadic templates. That said, I could have ventured into non-standard or C++17 territory so I'll want more experienced eyes to review it.
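For illustration, here's roughly the shape of that ordered merge. This is a simplified sketch written for this comment; the names and the exact breakdown into specializations are mine and don't mirror the repo code line-for-line.

#include <ratio>
#include <type_traits>

// Base dimensions are tag structs exposing an integer DIM_ID, as described above.
template<class Dim, class Exp> struct dimension {};    // Exp is a std::ratio exponent
template<class... Ds>          struct dimensions {};   // kept sorted by Dim::DIM_ID

// Prepend a dimension to a list, dropping it when its exponent is zero (cancellation).
template<class D, class List> struct prepend;
template<class T, class E, class... Ds>
struct prepend<dimension<T, E>, dimensions<Ds...>>
{
	using type = typename std::conditional<(E::num == 0),
		dimensions<Ds...>,
		dimensions<dimension<T, E>, Ds...>>::type;
};

template<class A, class B> struct multiply_impl;

// Helper that instantiates only the branch selected by Which:
//   0 = front dimensions are the same type (sum exponents)
//   1 = A's front has the lower DIM_ID (emit it, advance A)
//   2 = B's front has the lower DIM_ID (emit it, advance B)
template<int Which, class A, class B> struct merge_front;

template<class T, class E1, class... As, class E2, class... Bs>
struct merge_front<0, dimensions<dimension<T, E1>, As...>, dimensions<dimension<T, E2>, Bs...>>
{
	using type = typename prepend<dimension<T, std::ratio_add<E1, E2>>,
		typename multiply_impl<dimensions<As...>, dimensions<Bs...>>::type>::type;
};
template<class A0, class... As, class B>
struct merge_front<1, dimensions<A0, As...>, B>
{
	using type = typename prepend<A0,
		typename multiply_impl<dimensions<As...>, B>::type>::type;
};
template<class A, class B0, class... Bs>
struct merge_front<2, A, dimensions<B0, Bs...>>
{
	using type = typename prepend<B0,
		typename multiply_impl<A, dimensions<Bs...>>::type>::type;
};

// Base cases: one operand has no remaining dimensions.
template<class... Bs>
struct multiply_impl<dimensions<>, dimensions<Bs...>> { using type = dimensions<Bs...>; };
template<class A0, class... As>
struct multiply_impl<dimensions<A0, As...>, dimensions<>> { using type = dimensions<A0, As...>; };

// Recursive case: compare the front dimensions' IDs to choose a branch.
template<class T1, class E1, class... As, class T2, class E2, class... Bs>
struct multiply_impl<dimensions<dimension<T1, E1>, As...>, dimensions<dimension<T2, E2>, Bs...>>
{
	static const int which = std::is_same<T1, T2>::value ? 0
		: (T1::DIM_ID < T2::DIM_ID) ? 1 : 2;
	using type = typename merge_front<which,
		dimensions<dimension<T1, E1>, As...>,
		dimensions<dimension<T2, E2>, Bs...>>::type;
};

template<class A, class B>
using multiply_t = typename multiply_impl<A, B>::type;

// Example, using illustrative tags: length^1 * time^-1 yields the canonical velocity list.
namespace dims
{
	struct length { static const int DIM_ID = 100; };
	struct time   { static const int DIM_ID = 300; };
}
using velocity = multiply_t<dimensions<dimension<dims::length, std::ratio<1>>>,
                            dimensions<dimension<dims::time, std::ratio<-1>>>>;
static_assert(std::is_same<velocity,
	dimensions<dimension<dims::length, std::ratio<1>>,
	           dimension<dims::time, std::ratio<-1>>>>::value,
	"canonical order: length before time");

Because the lists stay sorted and zero exponents are dropped, two dimensionalities are equivalent exactly when their dimensions<...> types are identical, so equality checking remains a plain std::is_same.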
I'm currently weighing the merits of these two approaches (nomenclature mine):
//A
units::dimensions<units::dimension<units::length, std::ratio<1, 1>>, units::dimension<units::time, std::ratio<-1, 1>>>
//B
units::dimensions<units::length, std::ratio<1, 1>, units::time, std::ratio<-1, 1>>
Option B would make all template implementations somewhat uglier, but substantially shortens the printed type name (and hand-declarations of compound dimensions). So I think B is my preference.
It would be possible to muscle into even terser territory with encapsulating templates... But this might be a little overzealous:
units::dimensions<units::length_<1, 1>, units::time_<1, 1>>
I think there's a bit more development to do before declaring victory, like making sure that you can't violate the order conditions with arbitrary mathematical operations and type deduction.
That said, I'm just still not convinced that this error message is unclear:
meter_t a = 5_m / 1_s;
\units\include\units.h(1554): error C2338: Units are not compatible.
\units\include\units.h(1890): note: see reference to function template instantiation 'T units::convert<UnitsRhs,Units,T>(const T &) noexcept' being compiled
with
[
T=double,
UnitsRhs=units::velocity::meters_per_second,
Units=units::length::meter
]
\units\unitTests\main.cpp(3181): note: see reference to function template instantiation 'units::unit_t<units::length::meter,double,units::linear_scale>::unit_t<units::velocity::meters_per_second,double,units::linear_scale>(const units::unit_t<units::velocity::meters_per_second,double,units::linear_scale> &) noexcept' being compiled
I've pushed the multiply implementation, please give it a look: https://github.com/EvanBalster/units_variadic_test
In the error example you provide, the message is relatively short and with a bit of reading we can find the problem line. A quick examination of the problem line makes the error obvious — but I've run into situations where this isn't the case.
struct Tube
{
	meter_t met = 1_m / 1_s;
};
To make matters worse, some compilers (clang) will make the error substantially more verbose than MSVC does:
deps/all/units/units.h:1477:3: error: static_assert failed "Units are not compatible."
static_assert(traits::is_convertible_unit<UnitFrom, UnitTo>::value, "Units are not compatible.");
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
deps/all/units/units.h:1808:14: note: in instantiation of function template specialization
'units::convert<units::unit<std::__1::ratio<1, 1>, units::base_unit<std::__1::ratio<1, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<-1, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, units::unit<std::__1::ratio<1, 1>,
units::base_unit<std::__1::ratio<1, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, float>' requested here
nls(units::convert<UnitsRhs, Units, T>(rhs.m_value), std::true_type() /*store linear value*/)
^
src/wreath/carnal/vascular_system.cpp:74:16: note: in instantiation of function template specialization
'units::unit_t<units::unit<std::__1::ratio<1, 1>, units::base_unit<std::__1::ratio<1, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, float,
linear_scale>::unit_t<units::unit<std::__1::ratio<1, 1>, units::base_unit<std::__1::ratio<1, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<-1, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, float, linear_scale>' requested here
meter_t met = 1_m / 1_s;
Where I've run into trouble is in complex code where I use a multitude of dimensions and don't want to be typing out full templates or typedefs. Consider this bit of code:
VascularJoint::Dimensions jointDims = joint.dimensions;
Vascule::State
&from = (joint.momentum() >= 0.0f) ? joint.a : joint.b,
&to = (joint.momentum() >= 0.0f) ? joint.b : joint.a;
// Get properties of source vascule's contents.
// For now we treat the joint as if it is full of the same fluid...
kilogram_t fromMass_orig = from.mass;
//cubic_meter_t fromVol_orig = from.volume;
auto fromDensity = from.mass / from.volume;
// What is the total mass of the fluid contained in the joint?
kilogram_t jointMass = fromDensity * jointDims.volume();
// movement of fluid in kilogram meters --- NEED TO SIMPLIFY THIS
auto displacement = dt * joint.momentum;
auto movement = displacement / jointMass;
// ERROR ON THIS LINE
kilogram_t flowMass = movement * jointDims.length * fromDensity;
This time around I might need to get a pen and paper to figure out the problem on my own, especially with an error message like this:
deps/all/units/units.h:1477:3: error: static_assert failed "Units are not compatible."
static_assert(traits::is_convertible_unit<UnitFrom, UnitTo>::value, "Units are not compatible.");
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
deps/all/units/units.h:1808:14: note: in instantiation of function template specialization
'units::convert<units::unit<std::__1::ratio<1, 1>, units::base_unit<std::__1::ratio<-1, 1>, std::__1::ratio<1, 1>,
std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, units::unit<std::__1::ratio<1000, 1>,
units::unit<std::__1::ratio<1, 1000>, units::base_unit<std::__1::ratio<0, 1>, std::__1::ratio<1, 1>, std::__1::ratio<0,
1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>
>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, float>' requested
here
nls(units::convert<UnitsRhs, Units, T>(rhs.m_value), std::true_type() /*store linear value*/)
^
src/wreath/carnal/vascular_system.cpp:97:25: note: in instantiation of function template specialization
'units::unit_t<units::unit<std::__1::ratio<1000, 1>, units::unit<std::__1::ratio<1, 1000>,
units::base_unit<std::__1::ratio<0, 1>, std::__1::ratio<1, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, float,
linear_scale>::unit_t<units::unit<std::__1::ratio<1, 1>, units::base_unit<std::__1::ratio<-1, 1>, std::__1::ratio<1,
1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>, std::__1::ratio<0, 1>,
std::__1::ratio<0, 1> >, std::__1::ratio<0, 1>, std::__1::ratio<0, 1> >, float, linear_scale>' requested here
kilogram_t flowMass = movement * jointDims.length * fromDensity;
If this printout more clearly depicted the problem dimensions, I might be able to track down my mistake more easily. In this case, I've substituted jointDims.length (meters) for jointDims.crossSectionArea (square meters).
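Working the dimensions out by hand from the code above: movement comes out in m (kg m divided by kg), so movement * jointDims.length * fromDensity is m * m * kg m^-3 = kg m^-1, which matches the m^-1 kg exponents buried in the error; with crossSectionArea it would be m * m^2 * kg m^-3 = kg, as intended.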
Variadics test now builds and runs under clang and MSVC.
Clang reports an error on this line:
dimensions<length_1> m = divide_t<dimensions<length_1>, dimensions<time_1>>();
as:
test.cpp:118:23: error: no viable conversion from 'dimensions<[...],
v_units::dimension<v_dimensions::time, std::__1::ratio<-1, 1> >>' to 'dimensions<[...], (no
argument)>'
dimensions<length_1> m = divide_t<dimensions<length_1>, dimensions<time_1>>();
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./v_units.h:81:9: note: candidate constructor (the implicit copy constructor) not viable: no known
conversion from 'divide_t<dimensions<length_1>, dimensions<time_1> >' (aka
'v_units::dimensions<v_units::dimension<v_dimensions::length, std::__1::ratio<1, 1> >,
v_units::dimension<v_dimensions::time, std::__1::ratio<-1, 1> > >') to 'const
v_units::dimensions<v_units::dimension<v_dimensions::length, std::__1::ratio<1, 1> > > &' for 1st
argument
struct dimensions
^
./v_units.h:81:9: note: candidate constructor (the implicit move constructor) not viable: no known
conversion from 'divide_t<dimensions<length_1>, dimensions<time_1> >' (aka
'v_units::dimensions<v_units::dimension<v_dimensions::length, std::__1::ratio<1, 1> >,
v_units::dimension<v_dimensions::time, std::__1::ratio<-1, 1> > >') to
'v_units::dimensions<v_units::dimension<v_dimensions::length, std::__1::ratio<1, 1> > > &&' for 1st
argument
Methinks getting rid of the dimension<> template would go a lot further still in making these templates readable (and writable).
I've always thought the ID approach is flimsy. Anyone can write
struct velocity { static const int DIM_ID = 800; };
and break everything, so someone will, and your approach won't catch it. It's a non-starter for me.
I don't feel very responsible for clang dishing out a poor error in this case. Pointing out a failing static assertion seems kind of dumb to me, since it's obviously the precondition that failed.
I would still find your error message to be annoyingly verbose.
The squiggly errors will be less meaningful once you template the unit into a container, and the unit container into a user class. Pure unit tag errors in this library look similar, maybe even slightly less verbose.
At the end of the day this is a question of debugging skills. So what if the compiler error doesn't pinpoint the dimension? It pinpoints the line of the error, so just do
std::cout << movement * jointDims.length * fromDensity;
and you'll see the dimension in human-readable notation such as 10.0 m^2 kg (or whatever the erroneous result is), and then you'll say "oops, that's not a kilogram". You'll probably want to print the value anyway, in order to verify that the magnitude wasn't incorrect in addition to the dimension, so really you kill two birds with one stone.
I kind of disagree with the fundamental assumption here that my design goal is to provide pinpoint error messages. There are other efforts to fix template errors, and I expect someday these will improve as a result. What I'm going for is:
I'm willing to make small or simple changes that will improve the error readability, but a lot of it is dependent on compilers or use case and I'm not really willing to open that can of worms. I think the readability gain you get, when all is said and done, will be marginal, and I don't think it matters anyway because I think there are better ways to debug the types. I think what you lose in safety and extensibility is not worth giving up, so I'm not going to do a ground-up re-arch.
Ah — that feels a little hasty.
Firstly, I agree that the readability gains are marginal there, and wouldn't be worth the effort on their own. But I think I'm onto something more useful here, if you'll entertain me.
https://github.com/EvanBalster/units_variadic_test/blob/master/test.cpp
I've written a superior implementation with the "B" style parameters I mentioned earlier. Error readability is improved, and the benefits go further — this approach makes it easier to declare new base_unit types, makes those declarations more self-documenting, makes it trivial to add new user-defined dimensions and, I believe, will reduce compilation overhead.
Here's how a base_unit is defined under the new system, alternating base dimensions and exponents.
// Simple dimensions
using dim_length = dimensions<length, ratio<1>>;
using dim_time = dimensions<time, ratio<1>>;
using dim_mass = dimensions<mass, ratio<1>>;
// More complicated dimensions
using dim_velocity = dimensions<length, ratio<1>, time, ratio<-1>>;
using dim_force = dimensions<length, ratio<1>, mass, ratio<1>, time, ratio<-2>>;
Shorter, more readable and more self-documenting. I don't need to know the fixed order of dimensions in the system to read these. Writing them directly does require order-awareness but that is also the case for the current units library.
Of course, most of us will probably stick to defining things as compound units, which is as simple as ever:
using dim_impulse = multiply_t<dim_force, dim_time>;
Under the current design it isn't even possible for two different unit extension libraries to define new units and inter-operate without shim code. That's no better than boost::units' unit systems. The variadic approach comes with the major caveat that DIM_IDs must be unique, and I agree that is inelegant. They serve the sole purpose of defining a strict order in which the dimensions are listed. units.h also imposes a strict order which the user has to learn.
Addressing your concern about silent failures, the templates could be changed to assert-fail whenever they encounter dimensions with different types using the same IDs. Shall I implement that? Putting these checks in assign and multiply operations would be sufficient to allow a library user to track down any collisions.
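For example, just a sketch of the kind of check I mean (not code from the repo; the names are illustrative):

#include <type_traits>

template<class T1, class T2>
struct assert_no_id_collision
{
	static_assert(std::is_same<T1, T2>::value || (T1::DIM_ID != T2::DIM_ID),
		"Two distinct base dimension types share the same DIM_ID.");
	static const bool value = true;
};
// Instantiating this from the assignment and multiply machinery would surface a collision
// at the point where the two offending dimensions first meet.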
There may be a way to eliminate the integer IDs entirely by using a type-intrinsic collation order such as the one provided by std::type_info::before. The problem here is that the order would vary from compiler to compiler, and some kind of dimension-sorting template would be necessary to facilitate user-declared compound dimensions. (This sort operation would actually unburden the user from needing to define their dimensions in a fixed order, though.)
My original argument was that the full template should be self-explanatory, but I'm coming to realize this is one of the weaker cases for this change. Nevertheless, the "B" style provides a relatively terse template syntax compared to the others.
Here's that bug again:
dim_length m = divide_t<dim_length, dim_time>();
MSVC error:
error C2440: 'initializing': cannot convert from 'vp_units::dimensions<v_dimensions::length,std::ratio<1,1>,v_dimensions::time,std::ratio<-1,1>>' to 'dim_length'
Clang is still pretty unhelpful about this, but the template is now terse enough that I can parse out references to the dimensions of concern:
test.cpp:72:13: error: no viable conversion from 'dimensions<[2 * ...], v_dimensions::time, std::__1::ratio<-1, 1>>' to
'dimensions<[2 * ...], (no argument), (no argument)>'
dim_length m = divide_t<dim_length, dim_time>();
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./vp_units.h:58:9: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from
'divide_t<dim_length, dim_time>' (aka 'vp_units::dimensions<v_dimensions::length, std::__1::ratio<1, 1>,
v_dimensions::time, std::__1::ratio<-1, 1> >') to 'const vp_units::dimensions<v_dimensions::length, std::__1::ratio<1, 1> >
&' for 1st argument
struct dimensions
^
./vp_units.h:58:9: note: candidate constructor (the implicit move constructor) not viable: no known conversion from
'divide_t<dim_length, dim_time>' (aka 'vp_units::dimensions<v_dimensions::length, std::__1::ratio<1, 1>,
v_dimensions::time, std::__1::ratio<-1, 1> >') to 'vp_units::dimensions<v_dimensions::length, std::__1::ratio<1, 1> > &&'
for 1st argument
I can't really say this with certainty without branching the units library, but the template lists are more lightweight because they contain only the dimensions they use. I suspect this will make symbol lookup faster. Further, compilation cost does not scale up with the number of unique base dimensions defined, only the complexity of the composite dimensions which are used in a given type.
Have a look at vp_units.h. The mechanisms there aren't significantly more complicated than the current units implementation, with less than 200 lines of code in total to deal with the type checking. It works under clang and MSVC 2015.
Among the above arguments, I think the extensibility and simple declarations are the strongest. I hope you'll consider and discuss (and pardon my wonky nomenclature).
- You will basically never need to define a base unit in units. If you do define one (which you won't), it's impossible to do it wrong; it either won't compile, or it will have whatever meaning it's logically supposed to. Base units show up in the template errors (sometimes, not always), but like I said, who cares. It's like seeing std::basic_string<...> or something in a template error; you just ignore it as an unimportant implementation detail. Most importantly, they can't clash: if you define the same base twice, it doesn't matter, because they are by definition the same thing. It's literally impossible to define length two different ways, while it's very possible to do with unique IDs (struct length ... 100, struct distance ... 200). Show me how to do that with units.
- You can't break units the way that you can with IDs, which is its principal feature. Also you'd only be able to catch different types using the same ID if they were combined somewhere. Many silent errors still possible.
- The ordering in units isn't arbitrary, it's well and explicitly defined, in the code. There's no ambiguity about which dimension goes in the named 'meters' template parameter. Unit tag readability is what does matter and is very simple already:
namespace time
{
// a second is 1 unit of time
using seconds = units::unit<std::ratio<1>, units::category::time_unit>;
// a minute is 60 seconds
using minutes = units::unit<std::ratio<60>, seconds>;
// an hour is 60 minutes
using hours = units::unit<std::ratio<60>, minutes>;
}
- New units are just using newUnit = ... aliases.
- units base units are simpler. Just a wrapper around std::is_same<base_unit_1<...>, base_unit_2<...>> does what your 200 lines do. The real meat of the library is not base units, which you'll come to find out, it's dealing with datum shifts and real numbers, redefining meter_t, and all the actual conversions. The meter_t type is several levels of abstraction away from a base unit/dimension.
- I think you're discounting the type-safe part of the library guarantee, or not seeing that that's what's going on.
I'm pretty sure you can eventually iterate yourself to a different but stable working system. I'm just failing to see that the outcome will be substantially different from what already exists, and I don't want to take a stable, working implementation along for the ride. What's the upside to the risk?
Solid feedback.
base unit readability doesn't matter in units.
It pains me, but you're probably right. Human-readable templates are a "nice-to-have". Or at least that's what the C++ standard library has established. I'm not sure if I agree with it, but I can't really argue against the standard's way of doing things.
No, [integer dimension IDs] fundamentally define the dimension, and the equality or lack thereof of dimensions, and they are not checkable by the type system, thus they cannot be type safe.
Yes, this is a problem, but I think it can be fixed. Using proper type equality is simple enough; the hard part is determining collation order. Integer keys implement the latter in a simple (albeit klugey) way. I looked into ways of collating based on type identity, but I can't find a portable method.
The uniqueness property is not just inelegant, it's unenforceable at compile-time. This is a showstopper, at least for a type-safe, CT library.
Agree, it's a problem. I've already come up with a fix (unless I misunderstand your criticism) — I now use std::is_same rather than looking at equality of IDs.
See: https://github.com/EvanBalster/units_variadic_test/blob/master/v_dimensions.h
The real risk is users using different IDs for the (logically) same type (but c++wise different type).
Saying it's a problem for the user to be able to define length and distance as different base dimensions is like saying C++ is broken because I can define my own complex template, or a vec3 which differs from yours. It's a non-issue, especially when the core library provides the most useful items. Same applies to users making a fundamental dimension for velocity.
The ordering argument is very different between the two libs. Yours is arbitrary (and counterintuitive, because it actually could be made to work in any order, it's just not as efficient). You force something to fail which logically could succeed, and that's a code smell.
Agree, this is broken and I have a solution in mind. Specifically, I've implemented a template that takes a dimension list and transforms it into a canonical type. (Again I'll invoke the conceptual similarity to std::map here.)
// These all define the same type
using dim_velocity = dimensions<dims::length, ratio<1>, dims::time, ratio<-1>>;
using dim_velocity_a = make_dimension<dims::length, ratio<1>, dims::time, ratio<-1>>;
using dim_velocity_b = make_dimension<dims::time, ratio<-1>, dims::length, ratio<1>>;
Keep in mind that my concern here is the template machinery more than interface design; make_dimension should probably be the primary way to define a composite dimension from scratch, with dimensions treated as an implementation detail. And again, YMMV on wonky nomenclature.
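For what it's worth, a canonicalizing make_dimension can be built as a simple fold over an ordered multiply like the one sketched earlier. This is an illustration under that assumption (and it uses the dimension<>-wrapped form from that sketch rather than the flat "B" style), not the code in the repo:

template<class... Args> struct make_dimension_impl;

template<>
struct make_dimension_impl<> { using type = dimensions<>; };

template<class D, class E, class... Rest>
struct make_dimension_impl<D, E, Rest...>
{
	// multiply_t keeps its result sorted by ID and merges or cancels duplicates,
	// so the (dimension, exponent) pairs may be supplied in any order and still
	// yield one canonical type.
	using type = multiply_t<dimensions<dimension<D, E>>,
	                        typename make_dimension_impl<Rest...>::type>;
};

template<class... Args>
using make_dimension = typename make_dimension_impl<Args...>::type;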
unit tag readability is what does matter and is very simple already:
Agree, I'm only concerned with the semantics of base_unit. The primary unit template is great!
not sure what you are talking about with extensibility of the current design.
I'm talking about the extensibility of base_unit — specifically being able to introduce an arbitrary number of library- or user-defined fundamental dimensions, potentially originating in different libraries, without unduly increasing compiler burden. While these aren't likely to be numerous, they're potentially hard to foresee. My scheme here is extensible like boost's in that sense, except that unit systems don't need to be defined.
compilation overhead is more a factor of the total number of templates used. To generate the same amount of units, you'd need the same amount of types, including base types. Any reduction in base unit parameters will most likely be offset by the more complicated CT/constexpr logic to check equality, since the types aren't the same. Type equality checking on the other hand is basically free since the compiler has to generate it all anyway.
The matter of compiler performance is a bit speculative on both our parts. The only way to know for sure would be to benchmark...
units base units are simpler. [...] The real meat of the library is not base units, which you'll come to find out, it's dealing with datum shifts and real numbers, redefining meter_t, and all the actual conversions.
Yes, the work you've done on conversions and datum shifts is phenomenal. I'm just troubled by the limitations of the current base_unit template, and would like to explore improving it.
I'm pretty sure you can eventually iterate yourself to a different but stable working system. I'm just failing to see that the outcome will be substantially different from what already exists, and I don't want to take a stable, working implementation along for the ride. What's the upside to the risk?
Very true; this is why we have forking. I might tinker with an adaptation of units and get back to you.
I think a more mature iteration with those issues hammered out, which ensures type safety could be intriguing. I'll definitely keep an eye on the fork.
Hey, Nic — hope you're well!
I took some time this weekend to implement this idea in my fork, branching from the v2.3.x code. The implementation is complete (!) and passes all unit tests, along with a few new ones I've added. I made a best effort to conform with style, documentation and naming conventions in the original codebase, changing as little as possible about units.h.
https://github.com/EvanBalster/units/tree/eb-variadic-dimensions
// nholthaus/units declaration style
typedef base_unit<
std::ratio<-2>, std::ratio<0>, std::ratio<0>, std::ratio<2>,
std::ratio<0>, std::ratio<0>, std::ratio<0>, std::ratio<1>>
illuminance_unit;
// EvanBalster/units declaration style
typedef base_unit<
d::length, std::ratio<-2>,
d::angle, std::ratio<2>,
d::luminous_intensity, std::ratio<1>>
illuminance_unit;
These changes make it possible to separate core functions of the units library from the dimensions and units needed for a particular use-case. In spite of my original arguments, I think that's the most compelling benefit of the variadic scheme: no need to include electrical units in my fluid dynamics simulation (...until there is). If I need data length for my uploader or thaum for my physically-rigorous wizard simulator, I could define them in separate headers. That sort of modularization could help a lot with compile times, making it viable to use units in larger projects on smaller computers...
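As a purely hypothetical illustration of that modularization (the d:: namespace and unique_id member follow the notes below, but the exact declaration syntax in the branch may differ, and d::time is assumed to exist):

// my_data_units.h (hypothetical): a user-supplied header; nothing needs to be added to units.h itself.
namespace d
{
	struct data_length { static const int unique_id = 900; };  // assumed-unused ID
}
typedef base_unit<d::data_length, std::ratio<1>> data_length_unit;                           // e.g. bytes
typedef base_unit<d::data_length, std::ratio<1>, d::time, std::ratio<-1>> data_rate_unit;    // e.g. bytes per second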
This should be compatible with any client code that does not define new base_unit templates explicitly in the old style. Doing so will trigger a static_assert.
Notes:
- New base_unit_pow template and revised base unit multiply, divide, inverse, square, cube, square root and cube root implementations.
- Base dimensions are ordered by a unique_id.
Ongoing work:
- New base dimensions currently live in a units::d:: namespace, to avoid conflicts with units' namespaces. This should be changed later.
- Give base dimensions a typedef reference to their (forward-declared) standard units. Ex. units::d::length::standard_unit = units::length::meter
- Add a make_dimension helper template.
This units library generates many complicated templates during compilation, which can greatly slow compilation (notably under Clang) and produce excessively verbose error messages. The latter is especially concerning, because the main reason to use the units library is its ability to report errors in dimensional analysis to the user.
At the root of the problem is a template design that uses an exhaustive set of parameters to describe a unit. This set comprises, among other things, a rational scale and eight rational exponents which correspond to a fixed list of approved base dimensions — subject to expansion as new options demonstrate their utility. The lengthy error messages describing these templates must be carefully examined to determine the fault in the user's dimensional analysis, taking into account the standard order of approved base dimensions.
In my frustration I have contemplated making a branch of the library that uses integers instead of ratios as dimensional exponents, to improve error readability and compile time (and because I foresee no use for fractional dimensions in my math).
I have also seen that boost::units tackles another part of this problem by allowing the user to define different "systems" comprising chosen subsets of library- and user-defined units. Both of these are only partial solutions to the wider problem.
What I'm curious about is whether, with the latest C++ facilities, it's possible to implement base_unit such that any transformations thereof are consolidated to a simplified representation that associates base units with their exponents. That is, something like the following:
or,
Two types of simplification could be considered;
Obviously this is all hypothetical, would require a rewrite of the library, and may be prohibitively difficult or impossible under the current C++ spec. I'm interested in understanding the barriers involved and what improvements might be feasible on this front. I won't rule out trying my own hand at some of these improvements.