vrurg opened 4 years ago
FWIW, I'm thinking more about the interface towards core and module developers. Case in point: the IO::Path.child "secure" handling. It would be nice if one could say something like:
method child(...) {
    if CALLER::.at-revision("e") {
        # perform secure semantics
    }
    else {
        # perform old pre v6.e semantics
    }
}
Attaching the functionality to PseudoStashes would allow one to do more introspection, which could also be useful in some other cases. Also, attaching it to a PseudoStash allows one to pass it on, so that the above could be written as:
method !child(PseudoStash \stash, ...) {
    if stash.at-revision("e") {
        # perform secure semantics
    }
    else {
        # perform old pre v6.e semantics
    }
}
method child(|c) { self!child( CALLER::, |c ) }
Based on the experience we gained over the last year, I think we can do a few things to improve the situation. Eventually, the whole issue winds down to language revision-specific classes. rakudo/rakudo@6f99017003201b849405a173fcc11c5190fc2bef is an example of a situation which I'm trying to resolve with this proposal. The problem it's trying to solve: a method must behave differently for 6.e. The problem of the implementation (not mentioning nqp::getcomp, which is not up to the task): the class behavior must not depend on the caller's language revision, as this is likely to cause action at a distance.
The correct solution would be an IO::Path implementation specific to 6.e, as is already done for PseudoStash and Grammar. Yet, IO::Path itself is rather big and duplicating it as a whole is inefficient.
Thus, my proposal is to extract most of the class into a role in core.c. Both the 6.c and 6.e versions of the class would then share a common codebase, while 6.e would only have the methods with new semantics.
I would use a similar approach for the Raku class. The first benefit we can have here is the version method being defined as something like:
my $version := Version.new('6.e');
method version { $version }
In turn, this would provide a solution for the CORE-SETTING-REV symbol. For now it is the only way to know exactly what a scope's core revision is; respectively, a few roast tests rely on it. With a core-specific Raku class the tests could use its version method instead.
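A roast-style check could then look roughly like this (a hedged sketch: it assumes the proposed core-specific Raku class reports the revision of the core that declared it):

use v6.e.PREVIEW;
use Test;

# no need for the CORE-SETTING-REV symbol: ask the core's own Raku class
is Raku.version, Version.new('6.e'), 'core revision matches the requested language version';

done-testing;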
BTW, answering a proposal by @lizmat on IRC, the client language version could then be determined with CLIENT::Raku.version, which certainly reads better than anything else.
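Connecting that back to the opening example, a hypothetical sketch (assuming CLIENT::Raku would resolve to the caller's core-specific Raku class under this proposal):

method child(\name) {
    if CLIENT::Raku.version >= v6.e {
        # perform secure semantics
    }
    else {
        # perform old pre v6.e semantics
    }
}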
Getting back to IO::Path, here is my first view of the technical side of its implementation:
- The IO::Path content mostly transitions into an IO::Path::v6c role.
- In core.c: class IO::Path does IO::Path::v6c { }
- An IO::Path::v6e role in core.e holds the methods with new semantics.
- In core.e: class IO::Path does IO::Path::v6c does IO::Path::v6e { }
- Nothing can be done about the IO::Path children but to re-declare them in core.e again. Luckily, they're small enough not to cause a major memory hog. In either case, they could and perhaps should be re-implemented using the same role-based approach.

@lizmat whenever a discussion about language-specific behaviors comes to the point of changing a method's semantics, it is always considered a bad idea to rely on the caller's language version. First of all, the biggest question of all: whom do we consider the actual caller? How do we handle inheritance? And a couple of other questions I can't remember right now.
Here's a quick brain-dump of my pondering so far (which is quite a bit, because I'd already been thinking about it in the light of various other issues here).
The initial hope was that maybe we'd get away without needing to do versioning of OO APIs in CORE.setting, and at least we didn't have any time to design that back when the policy for such changes was hashed out before 6.c. Some years later, I think it's clear that there are quite a few things we'd like to do, many that I agree with, that are blocked on a solution in this problem space.
I'd vastly prefer that we explore declarative rather than imperative solutions to this problem. I'd also rather we avoid having to do things via pseudo-packages, which present significant optimization challenges. I'm pondering a solution that hangs off dispatch; that's perhaps a case of every problem looking like a thumb when you're holding a hammer, but I think it has some nice properties.
Effectively, methods (potentially subs too) would be marked with a language version constraint. A method we actually want to remove would be constrained to only be available in 6.e and earlier, for example (aside: it may not be called 6.e, but that's a separate discussion), while a method that should behave differently from 6.e would provide two multi candidates, each marked appropriately. Then, dispatch would evaluate those against the caller's language version. Since our new dispatch infrastructure is based on storing results at a callsite, the relationship between the caller and callee version is a constant at that callsite (or at least, as constant as the callee is), so it implies no further checking costs on future calls. Further, as callsame and friends are continuations of the same dispatch, iterating a pre-determined candidate list, they'd share the same notion of their caller.
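A purely hypothetical sketch of what such marked candidates could look like, written here with the is revision-gated trait spelling that a later comment in this thread introduces (the class, method and bodies are illustrative only):

class IO::Path {
    # old behaviour, for callers compiled against 6.d or earlier
    multi method child(Str() $name) is revision-gated("6.c") {
        ...
    }
    # the changed behaviour, for callers compiled against 6.e and later
    multi method child(Str() $name) is revision-gated("6.e") {
        ...
    }
}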
About "who is the caller" or rather "where is the boundary", my current thinking is that a sensible boundary might be the compilation unit - because that is the level at which you can place a use v6.d
style directive.
An open question is what of self.Foo::Bar style calls, which are not subject to the callsame rules of it being the same dispatch, because...well, it ain't, it's a new dispatch. Though since this is a mechanism for language versioning of the builtins, we could settle on "well, don't do that in version-sensitive code". Basically, you get to do the decision making at the first entry point into the CORE.setting compilation unit from outside of it. This is probably enough. At least for now, I consider us as just designing a solution for CORE.setting, and the consumers of it are limited in number. I guess we might offer it to module authors too, who want to behave differently for different Raku language version consumers, if there's interest, but I'd rather wait and see if there is any. Then we can decide, based on the experience we have of it in CORE.setting, whether there are rough edges we have to address.
About "who is the caller" or rather "where is the boundary", my current thinking is that a sensible boundary might be the compilation unit - because that is the level at which you can place a
use v6.d
style directive.
Absolutely. But it makes good sense from a lexical point of view, whereas the dynamic path could be convoluted and not always make it clear what frame is to be used as the source of the language revision information.
But my biggest concern is situations similar to the following scenario, based on the IO::Path story. Take 6.e code using a 6.c module. The code creates an IO::Path object and uses it; its child method works as expected. Then it passes the object into a module's routine and things break, because now the very same object responds differently, as it is 6.c code which invokes it. I can imagine the level of confusion of the one trying to debug the problem, because the last thing they'd do is go inside the otherwise perfectly working module to debug it.
The biggest problem with this situation is not even the fact that the object behaves differently inside the module. After all, the behavior of a 6.e object could be a surprise to the module too, and the outcome of such a surprise could be very unpleasant as well.
But the problem is that the calling code doesn't have any control over the situation. All it can do is create an object and observe the failure. In the case of a version-specific IO::Path class there is a solution: either the object will always have 6.e behavior, which doesn't cause any harm to the module (which is likely with the .child method); or the code can create an instance of CORE::v6c::IO::Path and use it to communicate with the module, clearly knowing in this case what to expect of the class instance.
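For illustration, that escape hatch might look like this under the proposal (CORE::v6c::IO::Path exists only in the proposal; process-path is a placeholder for the 6.c module's routine):

# explicitly hand the 6.c module an object with the 6.c semantics it was written against
my $legacy-path = CORE::v6c::IO::Path.new("reports/2021");
process-path($legacy-path);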
Moreover, if we're unsure about the language version of the class we're currently using, it is always easy to find out. Not so with method versions, where we'd need to determine which exact candidate is about to be called in a certain context. And, then again, it's absolutely unresolvable when it comes to the internals of a 3rd-party module.
I have just refreshed the content of the CORE.setting object change guidelines section in the article on versioning. Of most interest there are item 6 in the numbered list and the last bullet of the next list. The bullet mentions the OO::Monitors case, which is of interest in the light of what's being discussed here.
Another subject from the article worth mentioning is that core classes are mostly used for interchanging data. IO::Path is not as commonly interchanged as Array or Int, which provides some flexibility in decisions related to it. My role-based proposal solves part of the problem because it is possible to refer to the role where necessary, as it'd be shared between class versions. Yet, multi-version classes would have a rather good impact in the case of my above example. The module simply won't accept 6.e IO::Path objects, saving us from unexpected side effects. (Of course, this is only true if the module type-constrains its arguments.) And only where the module author clearly understands what they are doing can they type-constrain parameters with the base role, providing support for both the 6.c and 6.e class versions.
@vrurg your proposal will always bring the problem that we'd need to act retroactively. Now we know that IO::Path would have needed to be version specific from the start. We didn't know back then. So up until now no one writing a module knew that their signature should have said CORE::v6c::IO::Path instead of just IO::Path. No one did, so all existing code will expect IO::Path objects to behave like the original. Requiring them to adjust their signatures is just as bad as requiring them to adjust to the new behavior in the first place.
On the other hand with @jnthn's proposal, I really don't see that problem. So yes, 6.e code creates an IO::Path object and that object's child method will behave like 6.e code expects (and how it's documented as 6.e behavior). Then it passes the same object to 6.c code and what happens? That 6.c code calls .child and that method behaves...exactly like the 6.c code expected and how it was documented for 6.c at the time that 6.c code was written. Isn't that exactly what we need? The 6.c code will work fine, just as if it was still a world where everyone would only create IO::Path objects like it was 1999^W2015. Like you wrote, that IO::Path object behaving like specified by 6.e could surprise the 6.c code. Not just could, it would and it would definitely break that code! After all, the whole problem is breaking backwards compatibility. Why would the calling code need any control in the first place? What good would it do for it to force .child to behave like 6.e when the receiver of the result was written for a different behavior? How can that lead to any useful result?
That 6.c code calls .child and that method behaves...exactly like the 6.c code expected and how it was documented for 6.c at the time that 6.c code was written. Isn't that exactly what we need?
This would also require me to know the version of the module I pass an object into. Even if some particular cases are OK, in some other cases this could be a problem. The worst of all: an unworkaroundable problem.
Like you wrote, that IO::Path object behaving like specified by 6.e could surprise the 6.c code. Not just could, it would and it would definitely break that code!
It's better to be broken, with a chance for me to produce the right-version instance of the object and get it fixed, than to try to find out why something works in my tests and then out of the blue stops working in real code. The cost of debugging might skyrocket on some occasions.
Why would the calling code need any control in the first place?
It may not; but it may need it too. Choosing between "no options" and "there is a possible solution", I'm for the latter. Because I know exactly what my reaction would be to finding out that my Range gives me False when empty – and yet, when passed into a method, the method reacts as if the range is True, simply because somewhere deep inside it gets Bool-coerced by 6.c code! And I would have no control over it whatsoever, because the method I call works with ranges.
Not to mention again that debugging such cases would be a nightmare, as I noted in this comment.
It would be a special kind of mockery to debug a case of mixed 6.e and 6.c/d code calling each other in turns in a deep stack.
Ah, and one more thing. We are biased in our attempts to find a solution for core versioning issues. It's worth stepping back and looking from the user's perspective.
It feels wrong to have a problem-solving issue discussed in commit comments. So, I'll try to move it from over there.
On Monday, 17 April 2023 15:36:24 CEST Vadim Belman wrote: I'm against that idea because the same object/class behaving differently in different language versions would not only be a source of huge confusion but also a source of unsolvable problems. Say, an instance of a core class is passed from a 6.e module into a 6.d module – and it stops working as expected.
Expected by whom? It will work as expected by the author of the 6.d module. That module will be written with all the peculiarities of 6.d core objects in mind. It will be able to deal with the 6.d interface of a core object. It will not be able to if that object behaves differently.
See above. Basically, the problem is too complex, and it's about balancing between different "bads". I'm just trying to avoid the "bad" which would have no solution on the user's side.
This actually brings the exact same problems you see in the dispatch solution: my IO::Path $path = $some-foreign-object.give-me-a-path; can fail because that foreign object is written using an older language version, so its IO::Path is different from yours.
But then, if I suspect a problem of this kind, I can always ask what $path.^language-version is. And say to myself: "Aha!". And there is no need for me to know the module's version.
And what if $some-foreign-object does not even create the IO::Path object by itself, but just passes it through without any type constraint? You'd have to chase down the whole chain to know what's going on.
It wouldn't matter that much, because it is much, much easier to determine what language version declared the class than to find out where an object I create could end up. As one can see, there is no need to trace down the origin of an object after all.
It just crossed my mind that with a proper coercer, multi method COERCE(CORE::v6c::IO::Path) {...}, we can rely on my IO::Path() $path = ... to give us our language version of the object.
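A minimal sketch of how that could hang together under the versioned-classes proposal (CORE::v6c::IO::Path and the v6c/v6e roles exist only in the proposal; $some-foreign-object is the placeholder from above):

class IO::Path does IO::Path::v6c does IO::Path::v6e {
    # rebuild a 6.c object so that it carries this core's 6.e semantics
    multi method COERCE(CORE::v6c::IO::Path $old) {
        self.new($old.Str)
    }
}

# a coercion type in the declaration then upgrades foreign objects for us
my IO::Path() $path = $some-foreign-object.give-me-a-path;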
And, of course, it is always possible to use dispatching to choose behavior based on what version we get.
The matter of versioning and backward compatibility doesn't have simple solutions. But let's not do it in a way where one would have no options.
That 6.c code calls .child and that method behaves...exactly like the 6.c code expected and how it was documented for 6.c at the time that 6.c code was written. Isn't that exactly what we need?
This would also require me to know the version of the module I pass an object into. Even if some particular cases are OK, in some other cases this could be a problem. The worst of all: an unworkaroundable problem.
Why would I need to know the language version of that module? It will work as advertised, because it's getting objects that behave as documented at the time the code was written. It's quite the opposite: with your proposal I'd have to think about this all the time. When I pass an IO::Path to any method, I'd first have to check what kind of object the receiving code expects and somehow convert what I have into what it needs. That would be a nightmare.
Like you wrote, that IO::Path object behaving like specified by 6.e could surprise the 6.c code. Not just could, it would and it would definitely break that code!
It's better to be broken, with a chance for me to produce the right-version instance of the object and get it fixed, than to try to find out why something works in my tests and then out of the blue stops working in real code. The cost of debugging might skyrocket on some occasions.
No, it's better to not be broken in the first place. Why would something that works in your test not work in production? You are giving some code an object and that object will behave like the code expects it to. End of story.
Why would the calling code need any control in the first place?
It may not; but it may need it too. Choosing between "no options" and "there is a possible solution", I'm for the latter. Because I know exactly what my reaction would be to finding out that my Range gives me False when empty – and yet, when passed into a method, the method reacts as if the range is True, simply because somewhere deep inside it gets Bool-coerced by 6.c code! And I would have no control over it whatsoever, because the method I call works with ranges.
The code you are passing that Range object to expects empty ranges to boolify to True. It does so because it is calling into 6.c code - same as the day it was written. It would be breaking if that Range suddenly behaved differently. There is no need for you to have any control, because nothing breaks in the first place.
It would be a special kind of mockery to debug a case of mixed 6.e and 6.c/d code calling each other in turns in a deep stack.
Where is the difference between having to look at an object's .^language_version and a method's signature to find out what they expect?
It's worth stepping back and looking from the user's perspective.
I am. Why do you assert I am not?
But then, if I suspect a problem of this kind, I can always ask what $path.^language-version is. And say to myself: "Aha!". And there is no need for me to know the module's version.
So instead of everything just working fine, I have to add debug prints and defensive coercions. How is that better?
You keep bringing up some vague potential problems, but what concrete problem would be unsolvable? As long as the problem is just a hand-wavy "something could break", the "no options" is just an unfounded assumption. We cannot assess options if we don't even know the problem.
Why would I need to know the language version of that module? It will work like advertised because it's getting objects that behave like documented at the time the code was written
Consider a case where documentation says, roughly: "If the argument boolifies to False we're gonna do this; otherwise – this". You make sure your object is False. Then you pass it to the method – kaboom! Good if this happens at development time – and even then the cost of development could be higher due to hours lost trying to find out WTF is going on. But what if the case accidentally wasn't tested?
Speaking of development time, this is not an exaggeration. My work project sometimes compiles long enough to let me read some news before I can see the outcome of the changes made. Consider the change-compile-test debugging cycle if the problem is somewhere in a top dependency file, requiring recompilation of most if not all of a distribution!
Where is the difference between having to look at an object's .^language_version and a method's signature to find out what they expect?
Sorry, it's a bad idea mixing up work and discussion. :) I miss your point here: how is the signature involved?
It's worth stepping back and looking from the user's perspective.
I am. Why do you assert I am not?
Perhaps because you see just one side of it. I mean, it is clear to me why you like the dispatch-based solution. In some ways, I like it too.
So instead of everything just working fine, I have to add debug prints and defensive coercions. How is that better?
You keep bringing up some vague potential problems, but what concrete problem would be unsolvable? As long as the problem is just a hand-wavy "something could break", the "no options" is just an unfounded assumption. We cannot assess options if we don't even know the problem.
There are no good solutions to versioning problems. Simply none. I see downsides of my approach too. You call these "vague problems" – I call it modeling a situation and trying to foresee the outcomes of one solution and another. Of course we don't know the problem yet – because there is no implementation of the dispatch-based approach and no way of testing it in real life. Mind models are all we (I) have.
And yes, perhaps it's my background speaking, but I know too well what "no choice for you" means. Even in programming: long ago I escaped the world of Pascal for the freedom of C; then I deliberately rejected Python for the very same reason. So, yes, having a choice means a lot to me.
Aside from that, I like the consistency of versionized classes compared to the context uncertainty of the dispatch-based solution.
I think at this point I just have nothing more to add. It looks like if any vote happens, my voice would be alone. So, I wash my hands of it.
I miss your point here: how is the signature involved?
If you want to know whether a multi method call will behave differently depending on language version, all you need to do is look at the signatures of multi candidates.
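For instance, the candidates of a method and their signatures can be inspected programmatically as well (a small illustration, assuming IO::Path.child is the multi in question):

for IO::Path.^lookup('child').candidates -> $cand {
    say $cand.signature;   # one line per candidate, showing what it accepts
}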
Perhaps because you see just one side of it.
How do you know what I do or don't see? The dispatch-based approach is simply the only viable solution that has been proposed so far in the years we have been talking about this problem. In multiple attempts to find a solution, we always gravitate towards this, because anything else just brought more problems than it solves. The only known alternative that has a chance of working is to just bite the bullet and accept that we simply cannot change these interfaces, i.e. pick new names for methods with different behavior and (in documentation) deprecate the old ones.
Mind models are all we (I) have.
Sorry, but that's a trivial excuse. If we went with that, we could never do any risky change ever, because there's always the chance of some unforeseen problems it may cause. The way to get out of that is to do the actual work of thinking through possible cases, doing experimental implementations and gathering experience. That's how all of Raku came to be.
And indeed, you did bring a concrete example of something where you assume to have no options:
"If argument boolifies to False we gonna do this; otherwise – this". You make sure your object is False. Then pass it to the method – kaboom!"
Like that Range object you were talking about. But Raku has got you covered: $obj.do-something($range but False) and now it will boolify as False regardless of the language version of do-something. Suddenly "no options" turns into "trivially fixed". And that's why we need to look at concrete examples. I can't tell you what the options are if I don't know the problems you need to solve. We'd have to stay at "I assume there are no options" vs. "I assume that in a language as powerful as Raku, there are always options".
Sorry, but that's a trivial excuse. If we went with that, we could never do any risky change ever, because there's always the chance of some unforeseen problems it may cause.
You got it all wrong here. It's not because I'm afraid of unforeseen problems – it's because I see a problem.
I wouldn't even mind an experimental implementation. Though it would have a weak point: not seeing obvious problems right away wouldn't mean there will be no problems and disappointments in the long run. Consider the fact that 6.e is barely used now. Most code runs on the default 6.d, I think. And an especially rare case is a mix of modules in different language versions. Otherwise we would likely have noticed side effects of using augment in the 6.e core already.
But Raku has got you covered: $obj.do-something($range but False)
And you get an always-False range unconditionally.
Anyway, I didn't want to get back to this, but then some more thoughts have popped up.
What about MOP NQP code? When it invokes a version-dependent callback routine, which version is to be invoked? And why? What would one do to get the version they want called, not what the compiler thinks it must be?
Performance. In many cases it would be necessary to convert routines into multi-dispatch versions. One way or another, multi-dispatching brings in some penalties, both CPU- and memory-wise. I don't know how costly that would be; no judging here. But the per-version class approach wouldn't have this problem.
I would like to refer to the concrete situation that triggered the topic. Standalone Rakudo support comes to mind, again.
This particular emptiness problem has probably been proliferating in the Rakudo sources for a long time; see also https://github.com/rakudo/rakudo/issues/5143. I have kept warning people against using elems for an emptiness check ever since.
Anyway, the point is that the wrong behavior has never been a part of the Raku language. Breaking changes in the Raku language are an interesting (and completely untested) topic but here we can witness something much worse: Rakudo bug compatibility is being masked as a language change.
I frankly don't know what the point of use v6.c is if somebody writes a Raku compunit today, but that doesn't matter: the "state of the art" use v6.d is also affected anyway. These lines imply language compliance, not Rakudo version. When we acknowledge the discrepancy between the intended language behavior and the de facto Rakudo behavior, and we try to address it with some use <language-version> line, we have the following choices:
Resolutions 1 and 2 hint that the neglect of specifying Rakudo has been a problem, and that having some way to tie user code to the runtime environment would have been beneficial - please don't wipe this off the table yet again just because it doesn't solve the question of breakage. Moving forward, it does solve it, and hopefully most Raku code simply hasn't been written yet; also, "no breakage for anyone we don't even know about" cannot be guaranteed, ever.
Resolutions 3 and especially 4 mean that once again Rakudo abuses its monopolistic position to define the language. (I listed number 3 for completeness's sake, but I doubt anybody would think it's okay for something this mundane to be knowingly undefined.) How and why would you be interested in implementing a language if you are forced to comply with the bugs of another compiler? At this point, I think it would really be more honest to say that the Raku language is embodied by Rakudo, and whatever the Rakudo folks tell users about the stable interface is the stable interface, period.
Or finally acknowledge that the code people see broken is Rakudo code, not Raku code. If Rakudo naturally remains the only working compiler ever, it surely doesn't hurt much to have an option to tie the code to the runtime it works with.
FWIW, as part of solving an annoyance in splice, I have (without foreknowledge of this discussion) implemented the dispatch-based approach via a new is revision-gated("6.X") trait.
At least now we can see what shakes out of this new approach. As it stands, I think a fairly obvious next step is to submit a PR to address that elems issue...
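For a caller-side view of what this is meant to give us (a sketch only; the gated candidates are as in the declaration-side example earlier in this thread, and the exact semantics are per the linked splice work):

# caller-d.raku - a compunit on the default / older language version
my $p = "/tmp".IO;
$p.child("x");     # dispatch is expected to pick the "6.c"-gated candidate

# caller-e.raku - a 6.e compilation unit
use v6.e.PREVIEW;
my $p = "/tmp".IO;
$p.child("x");     # dispatch is expected to pick the "6.e"-gated candidate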
Though it would have a weak point: not seeing obvious problems right away wouldn't mean there will be no problems and disappointments in the long run.
How is this in any way singularly applicable to the dispatch-oriented approach and not also to the approach you are proposing @vrurg?
Regardless, I encourage any and all to go forth and break this new thingy in as many interesting ways as you can :)
@lizmat's latest work on solving #198 brought up a few additional questions in the area of handling different language revisions:
- Should the CORE-SETTING-REV symbol be a part of the language specification? It was introduced as a part of the implementation of 6.e support, but it feels more like an implementation detail to me than something to be standardized. For the moment it doesn't have an alternative, but perhaps I have an idea.
- The Raku::version method is currently mostly meaningless, because nqp::getcomp is likely to return not what a module developer with an explicit use v6 might expect.