WaffleLapkin opened 1 year ago
Would it make sense to have multiple default implementations of an item and choose one of them depending on whether another item is implemented?
Imo it is almost necessary to have that. Otherwise the default impls are forced to form a cyclic graph so that the dependencies always hit an implemented one. But for larger sets of alternatives those chains are likely suboptimal implementations. Even if there are only 2 options, one might still want the other default bodies that aren't part of the cycle to directly choose which one they use.
Tracking issue looks great! Can we get a draft PR adding this (as must_implement_one_of) to the reference?

(To clarify, that's not a blocker for using this on nightly, that's a blocker for stabilizing read_buf with this attribute applied to Read.)
@joshtriplett I'm not sure it's worth adding to the reference; as I said, in my opinion it's quite incomplete.
must_implement_one_of could also be used for the Wake trait, avoiding the dilemma of whether wake or wake_by_ref should be defaulted.
(potential) Uses
Another one is Iterator. Currently next is the mandatory method to implement. With this attribute implementations could also offer try_fold or next_chunk as their base methods.
@the8472 sadly... this is not actually possible. try_fold requires Self: Sized, so you can't call it in next (play). Unless you remove the bound from try_fold, which I think is a breaking change? (and also makes the vtable bigger, but that's unlikely to be a problem)
try_fold requires Self: Sized because it takes generic parameters and can't be used with dynamic dispatch (dyn Iterator). However, it might still be possible to implement next in terms of it, but I don't know how viable or useful that would be.
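As a rough sketch of that idea: written as a free function over a generic (hence Sized) iterator rather than as a default body, next can be expressed via try_fold by breaking out of the fold with the first item. The function name here is made up for illustration.

use std::ops::ControlFlow;

// next expressed via try_fold: short-circuit the fold on the first item.
// I is a generic type parameter, so I: Sized holds and try_fold's
// Self: Sized bound is not an obstacle here.
fn next_via_try_fold<I: Iterator>(iter: &mut I) -> Option<I::Item> {
    match iter.try_fold((), |(), item| ControlFlow::Break(item)) {
        ControlFlow::Break(item) => Some(item),
        ControlFlow::Continue(()) => None,
    }
}

fn main() {
    let mut it = [1, 2, 3].into_iter();
    assert_eq!(next_via_try_fold(&mut it), Some(1));
    assert_eq!(next_via_try_fold(&mut it), Some(2));
}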
This circles back to
Would it make sense to have multiple default implementations of an item and choose one of them depending on whether another item is implemented?
next would then only be implementable in terms of try_fold if try_fold is implemented, e.g. because Sized is always true.
The core necessity for this feature, in my opinion, comes first of all from asserting the validity of default implementations, which the compiler cannot (at the moment), and from a design standpoint arguably even shouldn't, track. The need to restrict a trait's MCD is just a byproduct of that requirement.
Instead of looking at the "completeness" of a trait implementation from the outer, trait "POV", we can always say that for a trait to be minimally implemented, each of its items should be minimally implemented.
Currently, this change of perspective leads to the following requirement (as stated in the issue description) for a member's minimal implementation: the member should have a default implementation, or the member should be implemented in the impl block.
This default requirement is constructed from three conditions:
We could change condition 1 to be "the member should have a valid default implementation", where the validity of a default implementation is defined by additional requirements in the form of attributes, e.g. "the default implementation requires another member to be implemented in the impl block", expressed as
trait Foo {
    #[requires_impl(b)] // or even #[default_requires_impl(b)] for more clarity
    fn a() {
        Self::b()
    }

    #[requires_impl(a)]
    fn b() {
        Self::a()
    }
}
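Under this hypothetical scheme, an implementor would then satisfy a's requirement by providing b in the impl block (MyType is just an illustrative name):

struct MyType;

impl Foo for MyType {
    // Providing b satisfies a's #[requires_impl(b)], so a's default body
    // (which calls Self::b) becomes valid and does not need to be written.
    fn b() {
        println!("concrete b");
    }
}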
This not only makes the minimal requirements clear when implementing the trait as a whole, but also provides a little perspective into the intended behaviour of each trait member and their interconnections, which could be handy e.g. in the case of big "internally co-dependent" traits like Iterator.
As for multiple default implementations for a member, I am not sure of cases where that would be useful, but if it were to be done, a member-level approach would be even more appropriate, as each default implementation could be treated as a separate "pseudo-member", each with its own requirements, which could then be "eliminated" and coerced into a single MCD for the member.
But for this to work the compiler would have to ensure, directly at the trait definition site, that no ambiguity between the available default implementations is possible once the member-wise requirements are satisfied. That task seems much more complicated, as it requires "negative requirements", but on the other hand there is no need for it to be part of this proposal. And if it is needed later, it could be built on top of the core "member-wise requirements" (or even "member-wise requirement tree") assertion for the MCD.
@chernoivanenkoofficial I sympathize with your idea (and I think I've heard similar ideas before, so you are not alone thinking this). I'm a bit concerned though that it will make checking for breaking changes even harder, since the requirements will be spread throughout the trait.
@WaffleLapkin Could you elaborate a little bit further on this problem?
From my point of view, this is negligible for small traits. And for big ones, piling a huge stack of attributes at the top level is already challenging (and a bit ugly) in itself, and then, if any breaking change were introduced, re-evaluating possibly complex and nested requirements "by hand" gets even trickier.
The only real benefit is the opportunity to compare them against each other "one-to-one" and tightly packed, which is also done "by hand" in such scenarios. And there is also a higher chance of not only introducing a breaking change, but doing so unnecessarily or incorrectly, which cannot be checked by the compiler, since it cannot infer by itself whether we mean to use a "client-provided" implementation or some kind of weird recursion (which loops us back to the original issue). Of course, the same problem applies to member-level requirements, but they are easier to track, since they are declared at the "definition site".
Also, we could have the compiler (or some external tool, like cargo-expand does for derive attributes, for example) compile all of the member-level requirements into a trait-level-like list of MCD requirements, thus keeping the advantages of both approaches and reducing the drawbacks. We could of course say that the other way around is possible as well, but inferring member-level requirements from top-level ones is a more complex task.
On the other hand, a need for external tooling is hardly a positive argument, although not a negative one either in this case, in my opinion. And of course, I may simply have misunderstood your remark.
@chernoivanenkoofficial I think you understood my comment quite well. I basically think that seeing the whole graph of requirements is useful, both to get the "bird's-eye view" and to review a diff in a PR. I feel like with per-method annotations it might be easy to introduce a non-local / non-obvious change.
Tooling might solve some problems (you should probably use cargo-semver-checks anyway, and I expect rustdoc to generate something for this feature), but while reviewing diffs most tooling is basically unavailable (I wish it wasn't like that...).
All that being said, I also see the appeal of per-method annotations. I don't really want to decide which option is best; I just want someone to implement r-a support so we can finally use this in std.
Maybe it would be better to consider having actual logic with this attribute? I can easily imagine a cyclic dependency between a function (or constant) foo and two functions (or constants) bar and baz, like
foo = (bar, baz)
bar = foo.0
baz = foo.1
This might be useful for config traits, especially with constants.
I propose something like
require(any(foo, all(bar, baz)))
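A sketch of that scenario, using constants (the #[require(...)] attribute is the hypothetical syntax proposed above, not an existing attribute, and the names are illustrative):

// Hypothetical: the attribute below does not exist; it expresses the MCD
// "implement FOO, or implement both BAR and BAZ". The cyclic const defaults
// themselves compile, since they are only evaluated when actually used.
#[require(any(FOO, all(BAR, BAZ)))]
trait Config {
    const FOO: (u32, u32) = (Self::BAR, Self::BAZ);
    const BAR: u32 = Self::FOO.0;
    const BAZ: u32 = Self::FOO.1;
}

// Satisfies the MCD via the all(BAR, BAZ) branch; FOO's default then
// evaluates to (1, 2) without hitting the cycle.
struct MyConfig;
impl Config for MyConfig {
    const BAR: u32 = 1;
    const BAZ: u32 = 2;
}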
As for multiple default implementations for a member, I am not sure of cases where that would be useful, but if it were to be done, a member-level approach would be even more appropriate
Those annotations would serve a different purpose from defining the graph itself, since they can also apply to methods that aren't part of a cycle. E.g. Iterator::fold could pick between try_fold, next and next_chunk even though fold itself does not form a minimal-implementation-set with anything.
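For concreteness, a rough sketch of a fold forwarded to try_fold (written as a free function with made-up names; this is not claiming to be how the standard library's default body is written):

use std::convert::Infallible;

// fold expressed via try_fold: wrap the accumulator in an infallible Result
// so try_fold's short-circuiting signature can drive a plain fold.
fn fold_via_try_fold<I, B, F>(mut iter: I, init: B, mut f: F) -> B
where
    I: Iterator,
    F: FnMut(B, I::Item) -> B,
{
    match iter.try_fold(init, |acc, x| Ok::<B, Infallible>(f(acc, x))) {
        Ok(acc) => acc,
        Err(never) => match never {},
    }
}

fn main() {
    assert_eq!(fold_via_try_fold(1..=4, 0, |acc, x| acc + x), 10);
}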
It might even be applicable to impls (like specialization), not just the trait's default bodies. E.g. iter::Map<I, F>::fold might choose a different body depending on which of I::{next, fold, next_chunk, try_fold} have a non-default impl.
So it probably makes sense to have either a trait-level or method-level annotation for defining the requirements and then another one for selecting the default body
trait Iterator {
    #[requires(try_fold | next_chunk)]
    fn next(&mut self) -> Option<Self::Item>
        #[if_implemented(try_fold)] {
            // default body written in terms of try_fold
        }
        #[if_implemented(next_chunk)] {
            // default body written in terms of next_chunk
        }
}
This is a tracking issue for attributes changing the "Minimal Complete Definition" of a trait. A "Minimal Complete Definition" is effectively a set of rules you need to follow (by implementing items of the trait) in order for a trait implementation to be "Complete" (and compile). Normally there is only one rule: implement all items that do not have defaults. However, sometimes it is meaningful to add additional restrictions.
As a simple example, consider a trait with functions equal and not_equal; both of them can be implemented as !the_other_function(...), but only if the other one has a meaningful implementation. Thus, it may be beneficial for the library design to make the Minimal Complete Definition be "you must implement at least one of equal and not_equal". A more realistic example would probably be performance related: one function is easier to implement, while the other theoretically allows a more performant implementation (see Read::{read, read_buf}).

Current status
This is currently implemented as an internal unstable rustc attribute, #[rustc_must_implement_one_of]. It accepts a list of identifiers of trait items and adds the requirement that at least one of the items from the list is implemented. A usage example:
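The following sketch uses the equal/not_equal trait from above; trait and type names are illustrative, and the internal attribute is only available on nightly behind #![feature(rustc_attrs)].

#![feature(rustc_attrs)]

// At least one of equal / not_equal must be implemented, even though both
// have default bodies defined in terms of each other.
#[rustc_must_implement_one_of(equal, not_equal)]
trait Equivalence {
    fn equal(&self, other: &Self) -> bool {
        !self.not_equal(other)
    }

    fn not_equal(&self, other: &Self) -> bool {
        !self.equal(other)
    }
}

struct Always;

// Implementing only equal satisfies the MCD; not_equal keeps its default.
impl Equivalence for Always {
    fn equal(&self, _other: &Self) -> bool {
        true
    }
}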
It is hopefully not the final form of the attribute, but just an "MVP". Some ideas that we might want to explore in the future:

- More complex requirements (e.g. a is mutually recursive with b, c is mutually recursive with d, so the MCD is (a | b) & (c | d))
- Would something like #[requires(...)] be easier to use/reason about?
- Traits like Iterator, where a lot of methods can be implemented in terms of each other

Stability
While the attribute itself is very unstable and an "MVP", we can still use it in stable and unstable APIs, if we are sure that this is something we wish to support in the future. Thus, we need to be able to control the stability of default implementations (to be able to test whether we even need to change the MCD of a trait, for example).

For this we have 2 tools. We can either require an unstable method (with #[rustc_must_implement_one_of(something_stable, something_unstable)] we can add a default implementation to something_stable without worrying: to use the default implementation one would need to opt in to an unstable feature in order to implement something_unstable), or use the #[rustc_default_body_unstable] attribute.

The #[rustc_default_body_unstable] attribute, as with any other stability attribute, allows putting a default body behind a feature gate. For example:
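The following sketch uses made-up names and assumes the attribute takes #[unstable]-style arguments (feature, issue); in practice it is used inside staged-API crates such as std.

trait Source {
    fn get(&mut self) -> u8;

    // Calling get_buffered stays stable, but an impl may rely on this
    // default body only by enabling the (illustrative) feature gate;
    // implementing get_buffered explicitly requires no feature.
    #[rustc_default_body_unstable(feature = "source_default_body", issue = "none")]
    fn get_buffered(&mut self) -> u8 {
        self.get()
    }
}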
About tracking issues

Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however not meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Steps
Unresolved Questions
See the "Current status".
Implementation history
(potential) Uses