Open nrc opened 10 years ago
I want this all the time when writing lots of generic code.
Massive agreement here.
I missed this issue and posted another version here https://internals.rust-lang.org/t/implicit-module-arguments/5022
In effect, I'm wondering there whether modules could perhaps not use the usual Rust type parameter syntax, but instead refer to items like types in the scope using them. These parameters would be viewed as implicit instead of explicit: you read the scope to determine the module's parameters, not the call site.
The reason for making module parameters implicit would be that they are usually library configuration parameters. We want them out of the way so that code does not read as overly generic.
It might resemble:

```rust
mod ParameterizedModule {
    implicit lifetime 'some_lifetime;
    implicit type TypeParam: ParameterTrait;
    implicit const ConstantParam: Type;
    ...
}
```
The bad part about this implicit formulation is that, afaik, `'some_lifetime` cannot exist at the module level, so you cannot use this module except from inside an `fn` or `impl`.
@burdges: I think your proposal conflates things, in a way I find rather problematic.

- `implicit` is not a keyword, IIRC, which has back-compat impacts.
- If the parameters are never written out (at the declaration or the `use` site), then what values do they get?
- It gives up naming the parameters at the `use` site (cf. the `impl Trait` RFC thread), which I would consider nice to have.

In short, I think that proposal would not work out well at all, and strongly prefer:

```rust
mod foo<'a, T: Trait, const x: usize = 3> { ... }
```

a declaration syntax that permits reusing all of the nice machinery we already have, and

```rust
use foo<Vec<u8>> as foo_vec
```

an invocation syntax that is explicit in its behavior and has a clear meaning to anyone familiar with Rust.

If module-level parameters are merely a shorthand for additional parameters on the items of the module, the latter can be left to later, and eventually defined as a similar shorthand for setting those parameters.
EDIT: Ah, I'd missed your "use whatever's fitting in-scope" bit. That's the least workable part of this, IMO - it's akin to Coq's curly-braced parameter declarations, which really only work because Coq is a proof assistant, and can provide incredibly detailed bounds on them. Without that, I suspect they'd be the next thing to useless, because an unbounded type parameter could be satisfied so many ways it's absurd.
As I posted in the internals thread, I think I agree with @eternaleye that implicit passing should be skipped. There are not so many `use` lines that this will make anything awkward.
In any case, I do like the idea of type and constant parameters for modules. I mostly just wondered if this would be an opportunity to do something implicit really well.
Perhaps this paper would be of use? It brings parametrized modules to Haskell:
Or look at how OCaml does it (especially the soon-coming implicit module support), which does first-class modules in a way that is perfectly efficient at run-time and compiles significantly faster than HKTs while supplying the entire power thereof (and far more; OCaml's modules can do many things that HKTs cannot). An OCaml-like first-class module system would pretty well fix most (all?) of the remaining higher typing problems in Rust.
@OvermindDL1: ML-style Functor-based modules have problems of their own; a significant motivation for Backpack was providing a similar level of power while avoiding those downsides. It's based on an earlier work by the same authors, called MixML, which entirely subsumed the features of ML-style modules.
OCaml's modules are not as limited as SML modules, and although I've done a cursory glance over the MixML documentation, I was not able to see what it gained (other than the examples being slower to compile). Over standard ML modules, MixML seems to add recursive module definitions (which can be done in OCaml already) and mixins (already supported in OCaml); however, OCaml supports a great deal that ML/SML do not, going far beyond MixML, including, off the top of my head:
Among others that I cannot recall offhand. But overall, yes, MixML is higher than ML, but OCaml is much higher still, and OCaml's module design is well worth copying (not the syntax, of course) for two very large reasons. First, compile times: the modules were designed to be extremely easy and efficient to handle, especially during optimization; very large OCaml projects with large numbers of modules still take only seconds to compile hundreds of files, unlike Rust. Second, the modules grew up based around necessity: OCaml is not a Haskell-ish research project but a real-world language whose features were added to solve problems, whatever they were, and consequently its module style is very well tested. The only downside is syntactic verbosity, which Implicit Modules will solve soon anyway.
Also, I've not touched SML/ML itself in a decade, only OCaml for ML-style languages, so they might have some more developments as well...
If your module is a file, then I suppose the syntax might be `mod<...> where ...;` or `mod self<...> where ...;` once early in the file.
I just want to note that "ML-style modules" is a much bigger feature than the "parameterizable modules" that are actually being proposed here. In particular the crucial part of "ML modules" is functors, that is, modules parameterized over modules, which (together with the ability to specify module signatures, by analogy to type signatures) allows one to express various kinds of ad-hoc polymorphism*. Allowing parameterization over modules together with module signatures would be a huge addition to Rust, especially since Rust already has its own solution for ad-hoc polymorphism (traits, a.k.a. type classes).
What is being proposed is merely being able to parameterize modules over types and lifetimes, which is a considerably smaller addition, and really only goes toward "abstracting the same things, just with less typing" rather than "new and more powerful abstractions". (Although abstracting a module over a type with a trait bound, `mod foo<T: Bar> { ... }`, ends up being roughly similar to an ML functor.)
* ("Polymorphism" is maybe not exactly the right word here given that functoring is explicit, but it's used for the same things.)
@glaebhoerl I'd put it a different way… Rust already supports parameterized modules and module signatures; they're called impls and traits (with singleton types). If more expressivity or sugar would be needed to make them as useful as in ML, perhaps it should be added, but it should be an extension of the existing system, not its own separate thing.
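The impls-and-traits correspondence can be made concrete in today's Rust. A minimal sketch, with hypothetical names (`Monoid`, `Additive`, `fold_all`): the trait plays the role of a module signature, a singleton type plus its impl plays the role of a module, and a generic function plays the role of a functor.

```rust
// Trait as "module signature".
trait Monoid {
    type Elem;
    fn empty() -> Self::Elem;
    fn combine(a: Self::Elem, b: Self::Elem) -> Self::Elem;
}

// Singleton type standing in for a "module" matching the signature.
struct Additive;

impl Monoid for Additive {
    type Elem = i64;
    fn empty() -> i64 { 0 }
    fn combine(a: i64, b: i64) -> i64 { a + b }
}

// A "functor": code parameterized over any module matching the signature.
fn fold_all<M: Monoid>(items: Vec<M::Elem>) -> M::Elem {
    items.into_iter().fold(M::empty(), M::combine)
}

fn main() {
    assert_eq!(fold_all::<Additive>(vec![1, 2, 3]), 6);
}
```

The missing ergonomics are mostly notational: there is no way to "open" `Additive` so its items come into scope unqualified.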
I'd think modules parameterized over constants, types with trait bounds, and lifetimes provide an ergonomic and productivity win. There are frequently parameters you do not want to focus on in a first pass, so module parameters give a "canonical easiest" way to export them to the caller.
You can achieve similar functionality with associated types and constants in a trait for singleton or uninhabited types, but now you're invested in deciding what goes into this trait and how it should parameterize every item in your module. I suppose `const` parameters may help eliminate that trait, but you'd want inherent `impl`s to contain traits, structs, enums, and type aliases. They cannot right now.
It'd be lovely if `mod M<..> where .. { ... }` were functionally equivalent to

```rust
enum M<..> where .. {}
impl M<..> where .. { ... }
```
In fact, if file modules supported parameters, then conceivably inherent `impl`s could be placed into files with some syntax like `impl Foo<..> where .. mod foo<..>;`.
As an aside, there are nice translations in *ML Modules and Haskell Type Classes: A Constructive Comparison*, including some explosion in complexity in both directions. In my reading, there are new "interesting" ways to obtain features from ML-style modules, but Rust already has most, like privacy, namespace management, etc., via `pub`, `use`, etc.
I suppose zero allocation futures https://github.com/alexcrichton/futures-rs/pull/437/commits/0f12f1d32ed18bb34605175d6961b00319da0971 might benefit from module parameters. @leodasvacas
> As an aside, there are nice translations in *ML Modules and Haskell Type Classes: A Constructive Comparison*, including some explosion in complexity in both directions.
As an aside, this seems to ignore the upcoming "Implicit Modules" feature coming to OCaml-style ML modules, which pretty well fixes the verbosity of the witness passing in ML-style modules, making them about as succinct as type classes but significantly faster to compile and better able to optimize the output code.
But yes, I would definitely choose OCaml-ML-style modules over just parameterized modules as it would give a lot of safety, power, be quick to compile, and has been very well tested for decades.
> I suppose `const` parameters may help eliminate that trait, but you'd want inherent `impl`s to contain traits, structs, enums, and type aliases. They cannot right now.
I think they should be able to. Sugar can come after that...
Is there any progress on this feature?
Maybe time to revisit this issue? I have a use-case: gfx-rs defines an abstraction layer (HAL) which is implemented by multiple back-ends (dx, metal, gl, etc.). However, there are multiple incompatible versions of OpenGL bindings (OpenGL vs WebGL). Most of the code in the GL backend (this code is quite large and is spread among several modules) could be shared, if only there were a way to be generic over which OpenGL bindings were used.

It's not really possible to work around this by making everything within the back-end generic, because there is a huge number of type aliases, structs, and free-standing functions which would all have to be made generic.
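The status quo being described can be sketched in miniature; `GlBindings`, `Device`, `describe`, and `NativeGl` are hypothetical stand-ins, not gfx-rs names. The point is that every item, not just the trait, has to repeat the bindings parameter.

```rust
// Hypothetical abstraction over incompatible OpenGL bindings.
trait GlBindings {
    fn backend_name(&self) -> &'static str;
}

// Without module parameters, each struct and free-standing function
// must carry its own `B: GlBindings` parameter:
struct Device<B: GlBindings> {
    gl: B,
}

fn describe<B: GlBindings>(dev: &Device<B>) -> String {
    format!("backend: {}", dev.gl.backend_name())
}

// One concrete binding, standing in for native GL.
struct NativeGl;

impl GlBindings for NativeGl {
    fn backend_name(&self) -> &'static str {
        "native-gl"
    }
}

fn main() {
    let dev = Device { gl: NativeGl };
    assert_eq!(describe(&dev), "backend: native-gl");
}
```

Multiply this by hundreds of items across several modules and the boilerplate cost the comment describes becomes clear.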
I would be happy with `mod Foo<T>;` syntax where `T` is just automatically in scope for the directly enclosed module's code file. Generic parameters would be accessible from nested modules via `super::T`, and could be made publicly visible from the generic module via `pub use self::T`.
My understanding is that this is fundamentally incoherent and so can't happen. Could be wrong though!
@steveklabnik what makes you say that? Generally coherence is only a problem cross-crate, but there's only one crate involved here?
I don't remember the exact details, to be honest. I think @withoutboats knows?
(Yeah, generally `mod Foo<T>` should be equivalent to `mod Foo` with every definition therein having an extra `<T>` generics parameter... if there's an issue with this I'd expect it to be around items which can't meaningfully have one, not coherence, at least in the sense of trait impl uniqueness.)
How do you deal with `PhantomData` and unused parameters with parameterized modules? I don't think it should be a problem, but I thought I'd bring it up. It might potentially make for very confusing code and non-local reasoning to have to add 👻📊 to a type definition when there is no type parameter directly on the type definition.
I'd imagine `mod Foo<T>` would only provide `T` as an optional type parameter for every item contained within, and that `T` must be specified/inferable at the usage site, or maybe even the `use` site; but if the type truly goes unused, then it does not become a real type parameter and does not impact variance for that item.

If you later use `T` in an item, then it becomes a real type parameter, but this alone cannot be a breaking change, because some `T` must already be present at usage sites. If your new usage impacts variance, then it might be a breaking change, just like changing variance today.
Is there any notion of variance for items like `struct`s, `enum`s, closures, etc. that do not explicitly look like types? I think `fn`s have an associated anonymous type, but no relevant variances. I've no idea how trait objects manage variance, but traits seemingly do not require variance information.

I'd assume parameterized modules would take their own variance from the types they contain, yes?
See
The idea of a module level variance seems too broad, but I like the idea of items in a module not being parametric if they don't use the module parameters or have their own parameters.
I don't understand why y'all would implement this anemic attempt at module functors when you have a perfectly serviceable applicative module system in the trait system. With some minor extensions, one could use the trait system to pull all of this off, aiui, without the major extension of adding functors to the language.
I suppose parameterized modules do not require variance anymore than traits require variance, but if you instantiate a parameterized module then instantiating particular types might become impossible.
What would that look like? I suppose

```rust
mod foo<T> {
    ...
}

use foo::<MyT>::*;
```

becomes

```rust
trait Foo {
    type T;
    ...
}

struct MyFoo;
impl Foo for MyFoo { type T = MyT; }

use MyFoo::*;
```
We'd seemingly need:

- imports from `impl` blocks and probably inherent `impl` blocks,
- `use Type::whatever` for anything without a `self` argument, not just `enum` variants, and
- `use` declarations in traits and presumably `impl` blocks.

Are those extensions all viable? I recall previous RFCs with `use` in traits or `impl` blocks all stalled, but maybe only because their usage was inconsistent with this usage for `use`, not sure.
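Restricted to what compiles today (i.e. without any of the `use` extensions above), the trait encoding looks like the following sketch; `describe` is a hypothetical item.

```rust
// Trait as "module signature", with one associated type parameter.
trait Foo {
    type T;
    fn describe(t: &Self::T) -> String;
}

// Singleton type acting as the instantiated "module".
struct MyFoo;

impl Foo for MyFoo {
    type T = u32;
    fn describe(t: &u32) -> String {
        format!("value: {t}")
    }
}

fn main() {
    // Lacking `use MyFoo::*`, every access must spell out the implementor:
    assert_eq!(<MyFoo as Foo>::describe(&5), "value: 5");
}
```

Which is exactly why the missing piece is import machinery rather than expressive power.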
There is yet another approach using only `struct`s and inherent `impl`s that goes like:

```rust
struct Foo<T>;

impl<T> Foo<T> {
    ...
}

use Foo::<MyT>::*;
```

which provides some advantages over traits, but runs into the `PhantomData` issue.
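The `PhantomData` issue in concrete form: `struct Foo<T>;` alone is rejected with E0392 ("parameter `T` is never used"), so the namespace type must carry a zero-sized marker. A minimal sketch; `fresh` is a hypothetical item.

```rust
use std::marker::PhantomData;

// `struct Foo<T>;` would not compile; the marker field "uses" T.
struct Foo<T>(PhantomData<T>);

impl<T: Default> Foo<T> {
    // A "module item" sharing the parameter T.
    fn fresh() -> T {
        T::default()
    }
}

fn main() {
    let n: u32 = Foo::<u32>::fresh();
    assert_eq!(n, 0);
}
```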
It's worth noting that https://github.com/rust-lang/rust/pull/48411 will very likely allow "hypotheticals" to work, i.e.

```rust
fn bar() {
    /* SomeTrait is not presumed to be implemented */
}

fn foo() where (): SomeTrait {
    /* uses SomeTrait::bar() */
}
```

Note that the bound is not on a parameter of `foo()`.

Hypotheticals like this are very closely related to nullary typeclasses, as well as module signatures, as a module doesn't really have a meaningful `Self` type.
For example,

```rust
mod foo {
    use bar;
    const X: usize = bar::Y;
}
```

could be equivalent to:

```rust
trait foo {
    const X: usize;
}

impl foo for () where (): bar {
    const X: usize = <() as bar>::Y;
}
```

via a trivial desugaring that "an imported module `name` desugars to `<() as name>`", combined with a slightly less-trivial signature-separation desugaring.
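The desugared side is already writable today when the bound is provable; a hedged sketch, assuming hypothetical traits `foo` and `bar` both implemented for `()`:

```rust
#[allow(non_camel_case_types)]
trait bar {
    const Y: usize;
}

#[allow(non_camel_case_types)]
trait foo {
    const X: usize;
}

impl bar for () {
    const Y: usize = 3;
}

// "Importing bar" becomes a hypothesis on (): this impl is only
// usable where `(): bar` holds, which it does here.
impl foo for () where (): bar {
    const X: usize = <() as bar>::Y;
}

fn main() {
    assert_eq!(<() as foo>::X, 3);
}
```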
At that point, I feel there are a number of ways the module system could be extended and empowered, all of which would effectively boil down to mere syntax sugar over traits.
Adding parameters could be done easily enough, perhaps with the added requirement that parameters be specified both in the `mod foo<T>;` location and in a `for<T>;` at the head of the separate module file.
Because the desugaring would enforce that the `Self` type is `()`, the only place the implementation could occur would be in the module itself, avoiding the risks of allowing full separation of signatures and bodies.
That power could be added later, in a backwards-compatible manner, if it was found to be desirable (cough cough, global allocator backends, cough cough).
Maybe worth revisiting this idea, since there seems to be at least one widespread case that's causing pains: async runtime selection. Funnily, I came up with exactly the same idea independently.
Thoughts on this: @Centril I was thinking about PhantomData too, and I think it wouldn't be too bad to get an error message about an unused implicit type parameter, provided the message clearly pointed at the line containing the module-level declaration and had a good explanation of the error. The user of the feature has to intentionally write it in the module anyway.
@strega-nil Could you give an example of how you would reduce the boilerplate of writing generics everywhere using the existing trait system? I don't see a way to avoid sprinkling every item with some kind of generic declaration.
@eternaleye Trivial syntax sugar is exactly what I was thinking; maybe it can be done even more trivially? I was thinking of the compiler just "copy-pasting" the signature from the module to the beginning of every item's generic declaration (just as the `self` parameter is implicit in some languages, but physically present as the zeroth argument when compiled).
@Kixunil For one, please take everything I said from more than a year or two ago with a grain of salt; I was not a nice person, and I've grown a lot in the past three years.
My thoughts were something like "allow named traits", but I can't say I have a good design for it.
@strega-nil Ah, I was hoping you had some nice trick. Thanks for explaining, happy to hear about your progress!
I would like to be able to parametrise modules by types and lifetimes. Type parameters are useful where many items in a module should use the same concrete type. E.g., taking some implementation as a parameter we want to ensure that all functions and data types in a module use the same implementor without annotating every item with the same type parameters. Likewise, parameterising by lifetimes is useful if we are to assume that many objects in a module have the same lifetime. This is especially useful in conjunction with arena allocation.
Details

- Module declarations may have formal type and lifetime parameters and where clauses, e.g., `mod foo<X, 'a> where X: Bar { ... }`.
- Module uses, including in `use` expressions which include an alias, can have actual type and lifetime parameters, e.g., `let x: foo<int>::Baz = ...;` or `use foo<int, 'static> as int_foo`.
- The usual rules around well-formedness wrt bounds, and inference, would apply.
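Under the "shorthand" reading discussed throughout the thread, such a declaration would desugar to per-item generics, which can be written in today's Rust. A hedged sketch; `Bar`, `Baz`, and `make` are hypothetical items (and `X` is instantiated at `u32`, since `int` is not a Rust type):

```rust
trait Bar {}

mod foo {
    use super::Bar;

    // Each item repeats the module's parameters and bounds:
    pub struct Baz<'a, X: Bar> {
        pub x: X,
        pub label: &'a str,
    }

    pub fn make<'a, X: Bar>(x: X, label: &'a str) -> Baz<'a, X> {
        Baz { x, label }
    }
}

impl Bar for u32 {}

fn main() {
    let b = foo::make(1u32, "one");
    assert_eq!((b.x, b.label), (1, "one"));
}
```

The proposal is then exactly the removal of this repetition: the parameters and the `where` clause move to the `mod` declaration and are filled in once at the use site.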