brson closed this issue 6 years ago.
Is there any update on this?
Checking types which depend on constants which depend on types is a thorny issue, and I think that that's where we are at right now. See #1062. If we could work out how to use associated constants in generic code better, that would get us a lot closer to type-level numbers (or constants in general).
It's maybe worth signaling @edwinb and @david-christiansen here, although Idris itself takes a far more drastic approach.
I would be interested if they have any comments. Rust has a couple of considerations that are different from some other languages that already have dependent types (including Idris):
Array sizes are based around the primitive integer type `usize`, rather than on inductively defined type-level naturals. We normalize by fully evaluating the expressions that specify sizes, i.e. the compiler actually knows (or can lazily evaluate) the exact size of every array as a 32/64-bit integer during type checking.
This falls apart the moment we allow a constant expression to contain a value that we don't actually know during type checking, e.g. a generic constant parameterized by some unknown type. So the challenge is to come up with a new method of normalizing constant expressions during type-checking, something backwards-compatible and not confusingly inconsistent with current features.
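As a small illustration of that normalization step (an editor's sketch in current stable Rust, not code from the thread): any constant expression in an array length is evaluated down to a concrete `usize` before type checking finishes.

```rust
// Array lengths are constant expressions that the compiler fully
// evaluates to a concrete usize during type checking.
const N: usize = 3;

fn main() {
    // `N * 2` normalizes to the literal 6, so the two types below unify:
    let a: [u8; N * 2] = [0u8; 6];
    assert_eq!(a.len(), 6);
}
```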
I would love to see this as well. My particular use case would be implementing something like:
fn kalman_filter<S, N>(prev_state: [[f64; S]; S], obs: ([f64; N], [[f64; S]; N])) -> [[f64; S]; S]
instead of hard-coding, say, a 3x3 matrix.
I'm saying this as someone who is merely an enthusiastic observer of Rust, having never actually used it. So please take it with a major grain of salt.
I wouldn't think that Idris-style dependent types would work particularly well in a language that isn't designed for them from the get-go - especially not in a language with side effects! I would imagine that, instead of automatically promoting user-written functions to the type level, it would be better to have a sub-language of allowed operations on the numeric types. Then, you can use techniques like Presburger arithmetic solvers or SMT solvers that know how to solve equality constraints over these expressions that arise during type checking.
@yav somewhat-recently extended GHC to support type checker plugins that can do whatever they want with equality constraints. He's used this to implement type-level naturals at https://github.com/yav/type-nat-solver - this is probably a better place to look than Idris.
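For what it's worth, the `kalman_filter` signature sketched above is roughly what Rust's later const generics feature (stabilized long after this discussion) makes expressible. An editor's sketch with a placeholder body, not a real filter:

```rust
// Sketch only: the body is a placeholder, not an actual Kalman update.
// S and N are type-level numbers checked at compile time.
fn kalman_filter<const S: usize, const N: usize>(
    prev_state: [[f64; S]; S],
    _obs: ([f64; N], [[f64; S]; N]),
) -> [[f64; S]; S] {
    prev_state // a real implementation would combine the observation here
}

fn main() {
    let state = [[1.0_f64; 3]; 3];
    let obs = ([0.0_f64; 2], [[0.0_f64; 3]; 2]);
    let next = kalman_filter(state, obs); // S = 3, N = 2 are inferred
    assert_eq!(next.len(), 3);
}
```

Mismatched dimensions (e.g. an observation matrix with the wrong row count) fail to compile rather than panic at runtime.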
@david-christiansen
What do you think about only promoting an explicit subset of the language to the type level? I wonder whether the recently introduced `const fn` may qualify for this purpose… At least, as far as I know, this part is free of side effects so far. Of course, this makes reasoning about equality significantly harder.
There is still the idea to allow parametrization over values of more general types. So we may want to avoid doing too much special-casing that works for type-level numerics only. Maybe this is too ambitious altogether. But even if we do it for numerics using some specialized solver first, I'd prefer to keep it extensible, so that the option to move to something more general is available.
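As it turned out, `const fn` did later come to serve exactly this role on stable Rust; a minimal sketch (editor's illustration) of a pure function being evaluated in a type-level position:

```rust
// `const fn` is the compile-time-evaluable, side-effect-free subset.
const fn square(n: usize) -> usize {
    n * n
}

fn main() {
    // The call is evaluated during type checking, like any array length.
    let grid = [0u8; square(4)];
    assert_eq!(grid.len(), 16);
}
```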
I don't know how well this would work for Rust (my instinct is that you'd need to redesign quite a bit), but I have sometimes wondered if we could have an imperative language with full dependent types by having only the pure subset available at the type level, more or less as you suggest.
This isn't even that different from what we do in Idris, in the sense that, at the type level, Idris treats non-total functions as constants, so you don't quite have the full language available. I don't see why something similar couldn't be done in principle for an imperative language, but I expect there will be interesting challenges in practice.
It's also a bit like what happens in programs using the Idris effects library - there is a separation between the pure programs which effects can be parameterised by, and the effectful programs themselves.
There's probably a PhD or two in this if anyone fancies having a go :)
cc @jroesch @freebroccolo @kmcallister
I have been busy most of the last week, really the last few weeks, so I'll give a few belated responses now. (I'll spare you the full details, but the punchline is a car accident last week, in which I was luckily not injured.) I regret that I can't really connect my thoughts to Idris, since I haven't learned it all that well yet; however I hope that my experience with Haskell bridges the gap a bit.
@yongqli I believe that #1062 would be a good start toward using constants this way. If it was accepted and implemented, you could use associated constants as a "workaround" until the dependent type situation improved. However, treating 3x3 matrices specially would require other changes to Rust. I am interested in some similar problems, like specialization for SIMD. (Operations on 2D/4D/8D vectors might be sped up relative to other sizes.) It's not clear how Rust will handle that kind of specialization.
@david-christiansen I appreciate your input (and I hope I'm not too verbose in replying). I think that @aepsil0n has given a similar response to what I would have said. Even though Rust 1.0 has (mostly) stabilized the language, the relatively new and unstable `const` function feature provides the possibility of having a pure subset of the language that can be promoted to the type level. (In fact, I've somewhat felt that this is the point of most `const` language features in Rust.) Because of this, I think that being able to check for equality of types is the real problem, whereas purity of operations is a relatively manageable issue (at least in theory).
I've gradually come around to @freebroccolo's point of view (as I understand it, which may not be very well). For backwards compatibility, the compiler presumably has to evaluate constant expressions as completely as possible (under our current understanding of "evaluation", i.e. where you can literally get some primitive value at the end). But past that, we need some kind of solver to deal with generic code, and we aren't very constrained by compatibility. It would be possible, though not ideal, to build an SMT solver directly into the compiler without a detailed explanation of its behavior. (The problem being that any "obvious" generalization of Rust's current arithmetic capability is probably undecidable, and in any case, the effect on compile times would be unpredictable in practice. For a systems language, of course, for most code a sufficiently dedicated/intelligent/experienced human should be able to tell whether it will compile, and roughly how it will scale, without knowing the gory details of the internals of a specific compiler. One awkward aspect in developing a new systems language is that we want the reference compiler to be excellent, while simultaneously wanting the language to be specified clearly enough that other compilers can arise and work well.)
It's almost certainly better to do something like what @yav has done with GHC and allow plugins to deal with arithmetic during type-checking (though this may be a ways off for Rust, since we have only vaguely planned a stable API for syntax-phase plugins, and don't support type-checking-phase plugins yet at all). But if a particular representation and set of operations for type-nats can be defined at the plugin/library level, we can define types/traits in the standard library that provide a standard representation. At least in theory, this provides the opportunity for a decent standard solution for users like @yongqli and me, who have specific use cases that only need simple checks, while more advanced users can pick different plugins/solvers if for some reason the standard implementation fails on certain cases.
@edwinb I'm thinking that that was mostly just an offhand comment, but I'm about to go back to school for an applied maths PhD, so maybe we should talk. Having been both a physicist-in-training (think Fortran 95) and the amateur developer of Rust's associated constant capability, I've certainly thought about imperative languages plus dependent types. =)
Actually, something like Haskell's DataKinds might not be a bad approach. The difficulty seems to be that we already allow primitives (particularly `usize`) at the type level.
@quantheory Interesting, our backgrounds and interests seem to be very similar. I have actually been thinking a lot about applying more advanced type system features such as this, or full dependent types, to numerical simulation techniques. I'm mostly approaching this from a numerical fluid dynamics point of view. Rust's type system is not quite powerful enough to do anything too fancy, but using Idris for this application domain appears to be out of the question for performance reasons (although that last statement probably requires some benchmarking).
There are things like compile-time physical units (fairly straightforward, once this feature is implemented) or efficient numerical solvers for certain kinds of equations that make use of information available at compile time for code generation. I'd love to talk about these things in more detail and share ideas at some point (perhaps independent of Rust). But, for now, just file this under motivation and use cases ;)
I think it might be worth pointing out here that @paholg has already implemented signed Peano numbers at the type level in rust:
http://paholg.com/doc/dimensioned/peano/index.html
Personally, I'd love to have type-level numbers not so much for new features (since we can already implement them ourselves) but for good error messages, and in order to be able to write generic code that works with arrays. I feel like comparison, equality, addition and subtraction would by themselves be sufficient for a whole lot of benefit, without (much) danger of adding to the complexity of a human understanding Rust code, or the dangers of undecidable code.
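For the curious, type-level naturals of the kind @paholg's crate provides can be hand-rolled in a few lines. The sketch below is an editor's illustration using associated constants; the `NatAdd` trait and all names are made up for this example, not the crate's actual API.

```rust
use std::marker::PhantomData;

// Type-level Peano naturals: Zero, Succ<Zero>, Succ<Succ<Zero>>, ...
struct Zero;
struct Succ<N>(PhantomData<N>);

// Reflect a type-level number back into a runtime value.
trait Nat {
    const VALUE: usize;
}
impl Nat for Zero {
    const VALUE: usize = 0;
}
impl<N: Nat> Nat for Succ<N> {
    const VALUE: usize = N::VALUE + 1;
}

// Type-level addition, encoded as a trait with an associated type.
trait NatAdd<M> {
    type Sum;
}
impl<M> NatAdd<M> for Zero {
    type Sum = M; // 0 + M = M
}
impl<N: NatAdd<M>, M> NatAdd<M> for Succ<N> {
    type Sum = Succ<N::Sum>; // (1 + N) + M = 1 + (N + M)
}

type Two = Succ<Succ<Zero>>;
type Three = Succ<Two>;

fn main() {
    assert_eq!(<Two as Nat>::VALUE, 2);
    // 2 + 3 computed entirely at the type level:
    assert_eq!(<<Two as NatAdd<Three>>::Sum as Nat>::VALUE, 5);
}
```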
Wow, this crate is awesome!
For floating-point numbers in types, @mrmonday started a discussion about it in the Idris community, which ended up as the issue idris-lang/Idris-dev#2609, where type unification in the presence of floating-point numbers allows you to prove `False = True`, i.e. `Void`.
Not having reflexive type equality might hurt rustc's type-handling performance, which is why I was against it, but it turns out that this affects the type system itself, not just the implementation.
There's also some discussion of this over in the D forums, if anyone's interested: http://forum.dlang.org/thread/rvgslgwzvqwchpowhntg@forum.dlang.org.
@eddyb honestly the thing with floats in the Idris issue shouldn't be surprising because they are just poorly behaved with respect to equality and algebraic properties.
Personally, I think the only reasonable approach here is to use a proper (co)algebraic or topological encoding of real numbers and force the types/specifications to be approximate. Something like Abstract Stone Duality (see Bauer's Marshall and Taylor's slides) would probably be the way to go.
This would probably make them less convenient to use, but it would be better than doing something unsound with the type system.
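To illustrate just how poorly floats behave under equality (an editor's aside): IEEE 754 `==` is not even reflexive, and equal values can have distinct bit patterns, so any type-level equality built on it is on shaky ground.

```rust
fn main() {
    let nan = f64::NAN;
    // `==` on floats is not reflexive: NaN != NaN.
    assert!(nan != nan);
    // And two distinct bit patterns compare equal:
    assert!(0.0_f64 == -0.0_f64);
    assert_ne!((0.0_f64).to_bits(), (-0.0_f64).to_bits());
}
```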
Adding "type level integers" to the title because I can never find this RFC with search
If and when Rust adds syntax for dependent typing, including "type-level numbers", it should be very generic. The syntax should be compatible with the promotion of suitable datatypes and traits like Haskell's, although of course Idris's type-level values are much cleaner. Permitting only the pure, total subset of the language is suitable for this.
Is there anything that is being done in this area?
I'm not well versed in the area of type systems, so I can't really judge the feasibility of this feature. But right now I see the following not-so-nice things in Rust:

- It's not possible to create a function that does "multiplication of two vectors that must have the same length" at the type level.
- Traits like `Clone` are only implemented for arrays `[T; 0]` through `[T; 32]` (see `array` itself, for example). It's very surprising if adding a number to an array can cause the array to become un-cloneable by default. Also, I think the number 32 is quite arbitrary, right?

Please point out the mistakes in my statements, if there are any :)
> not possible to create a function that does "multiplication of two vectors that must have the same length" at the type level

Vector (`Vec`, that is) size is a runtime property.

> adding a number to an array can cause the array to become un-cloneable

You can't really "add" a number to a fixed-size array since it's going to change its type (but yes, the cloneable argument is valid).
@aldanor I think he meant array.
> Vector (`Vec`, that is) size is a runtime property.

Yeah sorry, I was talking about a vector in the mathematical sense, provided as an array :)
> You can't really "add" a number to a fixed-size array since it's going to change its type (but yes, the cloneable argument is valid).

Here I didn't mean adding a number at runtime, but maybe adding one in the source code and trying to re-compile. And all of a sudden the `.clone()` that worked previously fails.
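The "two vectors that must have the same length" example from above did eventually become expressible once const generics landed, years after this exchange. An editor's sketch:

```rust
// The length N is checked at compile time: both arguments must share it.
fn dot<const N: usize>(a: [f64; N], b: [f64; N]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let d = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]);
    assert_eq!(d, 32.0);
    // dot([1.0, 2.0], [1.0, 2.0, 3.0]); // would not compile: lengths differ
}
```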
Actually (to be pedantic), mathematical vectors do not have to have a known size.
When this is done, it should be done in a generic way, such that any datatype can be elevated to the type level (as in Haskell and Idris). Constant functions should then be operable at the type level.
@alexchandel Not exactly "any datatype", but only those that permit structural equivalence.
I'm actually not in favor of expanding this feature to cover arbitrary data types, as Idris and Haskell do. Not right now, anyway. Designing that feature properly will take a lot of time, and we'll surely have to go through several rounds of discussion. But most of the use cases here need only a very simple feature: the ability to abstract over the N in `[T; N]`. That's it: they don't even need arithmetic at the type level in most cases.
I think the way this minimal feature could work is likely to be uncontroversial and simple both to specify and implement: it's kind of obvious how it'd function. So let's just focus on the bare-bones feature for now, which will solve a whole bunch of pain points in a simple way, and then we can look into expanding the feature later. If we let this succumb to scope creep, then we may end up having a discussion that continues for months about dependent types while the ability to write basic array abstractions remains missing.
I agree 100% with Patrick.
The sooner we can abstract over fixed size arrays, the sooner we can make `Copy: Clone` actually respected in the language. Please just figure something out that gets us that, and the rest can be dealt with later.
@retep998 Function items/function pointers also suffer from the same problem. The family of all function pointer types isn't reachable via `impl` either.
@bluss That's arguably solvable for the pointer types by introducing variadic generics (to get arbitrary numbers of arguments) and allowing specific simple forms of generics to apply to HRTB - @nikomatsakis could tell you more about the latter.
One idea I had w.r.t. abstracting over fixed-size arrays (while thinking about various ways to generalize our current dynamically-sized types), which doesn't even require type-level numbers, was that we could allow the syntax `A: [T]`. In other words, we could allow using `[T]` as a trait bound. This would be symmetric with current `trait`s, which can also be used both in trait object position (corresponding, in this case, to an array slice) and in trait bound position. Like `T: Trait` in general, `A: [T]` would mean that `A` is a particular type which is a fixed-size array containing elements of type `T`, but of unknown size. So each occurrence of the `A` type variable would be known to be the same specific type with the same specific size, again, like generics in general.
So these would be roughly equivalent:

```rust
impl<T, const N: usize> Clone for [T; N] { ... }
impl<A: [T]> Clone for A { ... }
```
Would the impl overlap checker be OK with the second one?
This would be even less expressive, I think, than even the most minimal formulation of `const` generics, but it seems like a natural extension of features we already have (no new "top-level" features, just fewer restrictions on how already-existing features may be used together), and may suffice for some cases.
(Perhaps we could also hang an associated `const usize` off of the trait `[T]`, to get at the array's size.)
@glaebhoerl It's very similar to the current mechanism in FixedSizeArray http://doc.rust-lang.org/nightly/src/core/up/src/libcore/array.rs.html#47-63 except the overlap check does not understand it in a way that's useful.
@glaebhoerl The way I see it, if you did that it would be introducing a piece of syntax which would be made redundant if the full feature is implemented.
Allowing integral generic arguments is something with much more use than simply parameterizing array size.
I think filling semantic holes is a worthwhile endeavour either way but YMMV.
@glaebhoerl Since @bluss mentioned it: it would be much easier to add that behavior to `Unsize<[T]>`, if you wanted an unstable band-aid, than to introduce another language feature.
I wouldn't be against it personally (e.g. I encourage building contraptions based on `T: Unsize<Trait>`, where both `T` and `Trait` are type parameters), but I can't speak for everyone; you would have to open an RFC and find out.
I've thought about having a lang item trait `FixedLengthArray` (and probably another for `Tuple`) that fills this role. That would also permit us to add some coercion methods like `&self -> &[T]` and so forth.
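A sketch of what such a trait could look like: the `FixedLengthArray` name comes from the comment above, but the member names (`LEN`, `as_slice`) are the editor's assumptions. With const generics, something like it is even writable in ordinary code today:

```rust
// Hypothetical shape of the lang-item trait described above.
trait FixedLengthArray<T> {
    const LEN: usize;
    fn as_slice(&self) -> &[T];
}

// A single blanket impl covers every array length.
impl<T, const N: usize> FixedLengthArray<T> for [T; N] {
    const LEN: usize = N;
    fn as_slice(&self) -> &[T] {
        self
    }
}

fn main() {
    let a = [1u32, 2, 3];
    assert_eq!(<[u32; 3] as FixedLengthArray<u32>>::LEN, 3);
    let s: &[u32] = FixedLengthArray::as_slice(&a); // coercion method
    assert_eq!(s.iter().sum::<u32>(), 6);
}
```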
You can think of `[T]` as already being a trait which, due to an accidental and artificial restriction, can currently only be used in trait object position :)
@glaebhoerl Again, that's exactly what `Unsize<[T]>` conveys, except for the coherence interactions.
`[T]` cannot really be treated like a trait - its alignment and drop semantics are static, not dynamic.
I've tried to express that several times in the past, and it gets so hairy it's no longer a trait.
> Again, that's exactly what `Unsize<[T]>` conveys, except for the coherence interactions.

Yes, that may be. The sentiment I'm trying to express is that plain `A: [T]` is infinitely better UX. Why clutter it up with unnecessary-confusion-inducing `Unsize<>` if you don't need to?
> `[T]` cannot really be treated like a trait - its alignment and drop semantics are static, not dynamic. I've tried to express that several times in the past and it gets so hairy it's no longer a trait.

That seems potentially more significant. Could you elaborate?
@glaebhoerl Trait objects in general make all information about the type dynamic, and every vtable (even if the trait has no methods) includes a destructor, the alignment and the size of the type.
To model `[T]` as a trait, say `trait Array<T> { const N: usize; }`, you'd have to express several facts, namely that:

- `Self` is covariant over `T`, not invariant like other traits
- the destructor is `for x in &mut self { drop_in_place(x) }`
- the alignment is `align_of::<T>()`
- the size is derived from the `N` associated constant
- there is an `N` associated constant to begin with, which can be somehow accessed

And then have the compiler optimize all of that into a single piece of metadata and no vtable. In the end, the complexity involved is enough for you to describe arbitrary DSTs, and I'd rather have #1524.
@eddyb Thanks, that's useful info. Now I see that I think we've been talking on different levels. (I feel like this happens often?) When I say "you can think of `[T]` as already being a trait", what I primarily mean is that it would make sense for the language-user to think about it this way on a conceptual level, and for us to package it that way for them. While what you hear is that it would make sense to represent things that way in the compiler. Point taken that this wouldn't be practical, but it also wasn't what I had in mind. :)
Anyway, just to avoid running further circles around this, let me clarify my basic contention. Which is that if we would be interested in providing something along these lines as a minimalistic solution for writing `impl`s that are generic over fixed-size array types -- that is, if it's something normal people are going to be using, and not just the internal implementation of `libstd` -- then:

- `A: [T]` is the most user-friendly syntax to use for it, because it reads kind of obviously ("the type `A` is an array of `T`s"), because it rather suggestively uses the same syntax as the types it classifies(!), and because of the symmetry with traits as already discussed. We could call it something different like `FixedSizeArray<T>` or `Unsize<[T]>`, but why, when this basically-perfect syntax is there for the taking?
- The compiler knowing that nothing other than fixed-size arrays can satisfy `[T]` -- would be important to make this actually-useful in practice, otherwise `impl<T: Clone, A: [T]> Clone for A` would just overlap with everything else and there's not much benefit. The `A: [T]` syntax is also much more suggestive of this "special" property than something with an English name, which superficially just looks like any old trait.

But if, for instance, it turns out that it wouldn't be powerful enough to express even many of the most basic trait `impl`s we want to be able to express, then that would be a good reason to not bother.
(I think that's about all there is to write about this so I'll probably stop here?)
In general I'm disinclined to have language constructs that don't mirror machine semantics out of perceived design coherence/orthogonality unless there's a really compelling reason to do so. I feel like in the past when we tried this we always regretted it. For example, in early Rust all types had a less-than operator defined for orthogonality reasons, but we had to jump through hoops to balance compile time and runtime performance for all those (what we would now call) PartialOrd implementations, and we never really succeeded.
@pcwalton Hmm, I don't really feel like I understand either half of your comment...
In what way does this fail to mirror machine semantics, or score any worse on that count than other existing features?
What kind of orthogonality would motivate having a universal operator `<`? The line of thought which makes me think `A: [T]` would be a decent idea doesn't make me feel like that would be anywhere close to a good idea, so I don't really grasp the analogy. I know a few languages have something like this (OCaml, but only for equality IIRC, and Erlang, but at least it has the excuse of being dynamically typed), but I think of those as basically hacks which were likely the result of painful but necessary compromises in the particular circumstances.
What is preventing this from being implemented?
Just hit this issue the other day as well, when I wanted to simply clone a `[u8; 4096]`. Apparently `Clone` isn't implemented for an array of bytes. What could possibly be more cloneable? Please make this possible.
@ERnsTL It is implemented for arrays up to size 32 if I'm not mistaken.
@ERnsTL As a hack, you can use `ptr::read`. Just remember to make sure that the inner type is actually `Copy`.
@ticki I thought all `[T; N]` were `Copy` iff `T: Copy`, hardcoded in the compiler; only the `Clone` implementation doesn't exist.
Copy requires a Clone bound.
@ticki Yes, but the compiler returns "implemented" without checking anything other than the contained type.
This is a postponement issue tracking postponed RFCs in the general area of "functions generic over a constant" or "generic constants" and so forth.

Example use cases for items parameterized by a constant:

- implementing traits for `[T; N]` for any `N`
- types parameterized by a constant (e.g. `SmallVec<32>`)

Example use cases for generic constants:

- `const FOO<T> = SomeType { value: None::<T> };` (`const fn` can also address this)

Here is a list of relevant RFCs: