Closed bradcray closed 6 years ago
My main complaint with the name `postInit` is that it's not verb-y, and I'd like method names to indicate some action taken on the object, so that you can use them in a sentence starting with the receiver.
`completeInit`, `finishInit`, or `finalizeInit` are all more verb-y, but each has one drawback or another (mainly: is "initialization" already done by the time the current `postInit` is called, or is running `postInit` part of "initialization"?).
I don't have any concrete concerns about this proposal, but I do worry that we're doing something "new" and that we might be introducing pitfalls/problems that we're not thinking of because we're not Object-Oriented programmers.
I liked that we were basically piggybacking off of Swift with the original proposal. I understand that that model breaks down for us with dependent/runtime types, but I wonder if there are other languages that we could look at.
In particular, I don't know of any language that has a post-init sort of functionality, and I have (likely unjustified) concerns that this will break down for deep object hierarchies or at the very least introduce unexpected/confusing behavior for a user-class that inherits from some library-class.
Anyways, I guess I don't really have anything useful to say or suggest, but I always worry when we do something different for things that aren't our differentiators (parallelism, distributed-mem, etc.)
> Anyways, I guess I don't really have anything useful to say or suggest, but I always worry when we do something different for things that aren't our differentiators (parallelism, distributed-mem, etc.)
Arguably runtime types for arrays are a differentiator since they can make working with arrays in generic code easier. Why do we care so much about array types? Because we have domains and distributed domains and because we want it to be easy to declare arrays that have the same distribution.
I agree that runtime types for arrays are a needed differentiator. The part that's not obvious to me is whether that forces us to invent a new initialization strategy or whether there's another language we could follow. I.e., we can't follow what Swift did, but is there some other language we can? (C++, D, something else?)
w.r.t. Michael's concern about naming: In my mind, once `init()`s return, the object is initialized to that level of the class hierarchy (but not fully constructed—i.e., the `new` hasn't completed yet), which is why `postInit()` is attractive in descriptiveness ("do this after initializing"), though I agree it's not very verb-y. I also like that it's short and sweet. I have mild regrets that it relies on camel-case (most of our other language-defined identifier names don't—e.g., `this()`, `these()`, `init()`, `deinit()`, etc.). Unfortunately, things that my head goes to like `finalize()` or `complete()` seem vague...
To Elliot's concerns: I feel like we did pretty extensive surveys of various initialization approaches in the original specification of initializers, and am a little wary of thinking that there's some holy grail that we just don't know about. I also think that our first stab at initializers has proven to be strong enough, and that this is a small enough step from it, that it doesn't feel deeply risky to me. As an OOP academic sibling of mine advised when I was expressing nervousness about adding objects to Chapel at all in year one due to fears of "which path to choose?", he thought that I should interpret the wide variety of approaches to OOP not as an arena of complexity to avoid getting caught up in, but as an indication that there are lots of ways to approach it and that we should feel empowered to make the decisions that are right for us.
That said, here's how I'd more directly address Elliot's concerns about `postInit()`: I think this new initializer story completely holds together without supporting `postInit()` (i.e., that the `init()`-only part of it would be sufficient... it definitely is for the challenges that the first draft of initializers led us into). That said, I feel more confident and comfortable having the `postInit()` hook because (a) it preserves the Swift-like notion of a phase 2 supporting dynamic dispatch to child class methods for users who want/need/think they need that capability; and (b) for users who lean primarily on compiler-provided initializers for a given class/record, it gives them the ability to add their own custom hook without rewriting the whole initializer from scratch. So in comparing the proposed approach to other languages, I'd suggest asking whether `init()` alone is sufficient (and I think it is, and is similar enough to C++-like approaches to feel confident that we're not carving out a crazily unique path). And then I'd view `postInit()` as a Chapel-specific feature/quirk in aid of productivity that isn't necessary but adds a small bit of convenience / power.
I've added links to the CHIP and presentation that introduce this now, as well as cross-references to related, optional issues that extend the core proposal.
How easy is it going to be to explain to morons like myself what exactly goes into `init()` and what into `postInit()`?
Brian: Easier than it is today (for either constructors or initializers), I believe. The complexity will largely depend on the complexity of your type hierarchy. If you're just doing records and base classes (those without inheritance), I believe it'll be quite easy (and definitely easier than today).
Let's add "convenience initializers" to this new proposal.
Just like with the current initializers, `init()` can contain `this.init(...)` instead of `super.init(...)`. The effects of `this.init(...)` are:

- All fields of the current class or record are initialized, so the enclosing `init` method cannot initialize them anywhere.
- The `initDone()` point (see #8289) has already occurred. (This is independent of whether such a point can be specified by the user explicitly or not.)

The first bullet is hard to contest, because the `init()` invoked by `this.init(...)` definitely initializes all fields and potentially performs actions that rely on all fields having been initialized.

For the second bullet, we can go either way; i.e., we can consider `initDone()` not to have happened yet upon the return from `this.init()`, unless the user indicated `initDone` explicitly in the other `init()`. We need to specify which way we choose to go.
> Just like with the current initializers, `init()` can contain `this.init(...)` instead of `super.init(...)`.
This is a great point, and I'm very embarrassed not to have thought of it in drafting the proposal. My intuition would be to support your second bullet, though I can see reasons to go either way.
I'd like to reconsider the piece about gradually changing the dynamic type of the object being created, à la C++, for `init()` with class hierarchies. I feel this adds complexity and results in dynamic dispatch working differently within methods invoked from initializers, directly or indirectly. Instead of doing that, I suggest the following.
Consider these parts of `init()`:
Phase 1a - "pre"-code: initial computations, create some temps, no access to the initialized object or its fields.
Phase 1b - initialize the fields of the current class.
Phase 1c - the rest of the initialization.
Given a class hierarchy A (base) -> B -> C -> D (derived), the execution order is:
Phase 1a : D -> C -> B -> A, then
Phase 1b : A -> B -> C -> D, then
Phase 1c : A -> B -> C -> D.
At the start of Phase 1c, all fields have been initialized. Phase 1c has full access to the initialized object. Its dynamic type is D, regardless of which initializer is executing, without the gradual change.
P.S. The `postInit()` method is unaffected by these considerations.
One way or another, we need to define the behavior of deinitializers to mirror that of initializers in this regard.
C++ community adopted the guideline "never call virtual functions during construction or destruction" - see this or this . "Calling virtual functions from a constructor or destructor is considered dangerous most of the times and must be avoided whenever possible."
Interestingly, C# seems to perform normal dynamic dispatch from constructors/destructors (which is unsafe), according to this blog .
A comment in that blog mentions an interesting case where dynamic dispatch is desirable (so C# works better) - when using factory methods:
```cpp
class Base {
public:
  virtual MyBaseControl* newControl() { return new MyBaseControl(); }
  void useControl() { control->doit(); }
  Base() {
    control = newControl();
    useControl();
  }
private:
  MyBaseControl* control;
};

class Derived : public Base {
public:
  virtual MyBaseControl* newControl() { return new MyDerivedControl(); }
  Derived() : Base() { }
};
```
When creating a `Derived` object, we want `MyDerivedControl::doit()` to be invoked, and, more importantly, an instance of `MyDerivedControl` to be placed in `control`.
A follow-up comment states that "being able to use factory methods in the constructor is very useful".
@vasslitvinov - I agree that the behavior of (virtual/dynamically dispatched) method calls in init might be confusing. I interpret the guidance you linked to about C++ as "style" guide type information that warns that it's something that confuses many people and it's probably best avoided. However that doesn't necessarily translate into what we should do in our language design.
In particular, I think being able to pass `this` somewhere else (even to just print out part of it / print its address) during an `init` method is important, at the very least for debugging. Similarly, I think it's worthwhile to be able to call methods on the already-constructed portion of an object. These might commonly include things like field accessors.

However, our language does not generally make a big syntactic difference between virtual and non-virtual methods. Likewise, we don't annotate which uses of `this` might result in a method call. Thus, I think it'd be hard to rule out the virtual case without making initializers quite a bit more restrictive than they need to be.

With the proposed design, I do think that guidance along the lines of "call dynamically dispatched methods in `postInit` and not in `init`" is reasonable and could even one day form the basis for a compilation warning.
Regarding your phase 1a/1b/1c proposal, we already have 1a and 1b (arguably) as the portions before and after super.init. We also already have 1c in the form of postInit. I think it's reasonable for postInit to be a separate method rather than a different section of the init method, because otherwise the control flow would be extremely confusing. Thus, I think the proposal already includes the important elements in the 1a/1b/1c idea but in a relatively less confusing manner.
I was thinking a little bit about postInit and how, for records with the compiler-default `init`, it shares some similarities with D's postblit method. That got me thinking about records or classes containing many fields (and note we might conceptually consider an array to be such a record, with one field per element). For such a record, an `init` method will necessarily make quite a few postInit calls, as each field is initialized. That will come at some performance cost.

I think we're expecting `postInit` to be used much less frequently than `init`. As such, I think in the compiler implementation we should take care to avoid calling `postInit` at all (including the virtual dispatch) in the event that we know the record or the class hierarchy has only trivial `postInit` methods. Note that such an optimization requires whole-program knowledge, because in a library setting we can always imagine adding a new derived/child class that adds a postInit. Anyway, we already perform one element of this optimization by detecting overridden methods and making the non-overridden ones statically (not dynamically) dispatched. So, in actuality, performing this optimization might amount to merely marking the `postInit` methods as candidates for inlining. I just think it's important that we are aware of the potential performance issue here.
Back on @vasslitvinov's initial point about `this.init()` calls - I can see either position, but would argue for not changing the behavior of method calls between initializers that utilize `this.init()` and initializers that utilize `super.init()`. I think it would be confusing, and make analysis especially hard in the case of conditionals that determine which one is called, e.g.:

```chapel
if someBool {
  super.init();
} else {
  this.init();
}
```

I think the presence of `this.init()` also significantly complicates the proposed extension of the `initDone` equivalent. This would argue for placing the CID transition prior to the `postInit()` call for the derived class and after every `super.init()` call has finished.
Vass reminded me of the need to specify the behavior for this.init() in conversation. I had been planning to create a new issue with this topic and remain inclined to do so even though some discussion has occurred here. Any objections to my doing so now?
@noakesmichael: I think splitting the this.init() part of the conversation off into its own issue due to it being fairly chunky and separable seems wise.
> One way or another, we need to define the behavior of deinitializers to mirror that of initializers in this regard.
I think this should be split off into its own issue as well, if it needs one. At present, I'm not seeing any way in which today's deinitializer story comes up short in any way.
> we already have 1a and 1b (arguably) as the portions before and after super.init
To clarify: in the "new" proposal, "1b" also contains the "switch the dynamic type of the object" magic, "then you can call any methods on self, with unusual behavior." In my view it is cleaner to have "1b" just initialize the fields, and leave 1c to make method calls. This allows method calls on self to have the normal dynamic dispatch semantics. In this case, there is still value in having the optional postInit() aka Phase 2 (with the optimization that Michael proposes).
Vass wrote:
> I'd like to reconsider the piece about gradually changing the dynamic type of the object being created
Michael said:
> ...the control flow would be extremely confusing.
This is my concern with the 1a, 1b, 1c proposal as well: it's much harder to describe and to implement than the current proposal. Specifically, I like that the new proposal can be described approximately as translating `new C(args)` into:

```
this = alloc_object(C);
this.init(args);
this.postInit();
```
Whereas the 1a, 1b, 1c proposal doesn't have an equivalently straightforward rewrite, nor a straightforward way of explaining the control flow that takes place within an `init()` routine. As I understand it, it essentially says "upon hitting the point when all fields are initialized, suspend execution and return to the child initializer, but with the intention of jumping back to this point to complete the execution, after which we'll jump back to the similar point of interruption in the child's initializer." This seems like a lot of complexity to implement and explain, and I'm not sure it's worth the benefit of not having dynamically dispatched method calls behave differently in initializers. (Specifically, I think having the dynamic type of `this` "evolve" as the type matures and becomes the intermediate classes is natural and even powerful rather than something to be avoided.)
A simpler alternative to Vass's proposal which would have a similar benefit but be easier to explain would be to say "you can't refer to `this` at all within initializers other than to set that class's fields, and everything else you might do must go into `postInit()`." I think this would cripple the initializers too much.
I do not buy the complexity argument. About a year ago something very similar to the "new" proposal was rejected on the grounds of complexity.
OTOH the 1a/b/c adjustment will not bring tangible benefits until Chapel is used to implement complex class hierarchies.
To say "someone called a proposal like yours complex once and therefore it's not reasonable to dismiss what I'm proposing as being complex" doesn't hold water for me. In the proposal, control flow follows normal program order: functions are called and they return. In your counterproposal, functions are called, then they return at arbitrary points only to later resume back to that point again—a style of control flow that does not currently exist in Chapel functions. To me that seems more complex than is ideal, both to understand and to implement. And without seeing a clear rationale for it, it feels like unnecessary complexity as well.
I see some of the initializers in Buffers.chpl have an argument `out err: syserr`. Is there any way for a `postInit()` method to contribute to that `out` value? (I could imagine it being hooked up so as to work in the variants where `postInit()` gets the same arguments as `init()`, but it could be confusing to allow both to store to their `err` argument.)

How does throwing an error in `postInit()` compare to throwing one in `init()`?
Let me be more explicit about that first part: I was going to try the exercise of porting the initializers in Buffer.chpl to the various variants, but I couldn't even get through the first one for this core proposal. In master today:
```chapel
proc bytes.init(len:int(64), out error:syserr) {
  this.home = here;
  super.init();
  error = qbytes_create_calloc(this._bytes_internal, len);
```
Since `_bytes_internal` uses the default value from the record definition, I think I can't reference it in `init()`. I could get at it in `postInit()`, but then I can't get the error code back to `init()`'s `out error`.
I was working on ChapelSync.chpl conversion on Friday afternoon, then mulling it over this weekend, and came across a challenge similar to Paul's:
> ...but I couldn't even get through the first one for this core proposal. In master today:
>
> ```chapel
> proc bytes.init(len:int(64), out error:syserr) {
>   this.home = here;
>   super.init();
>   error = qbytes_create_calloc(this._bytes_internal, len);
> ```
>
> Since `_bytes_internal` uses the default value from the record definition, I think I can't reference it in `init()`. I could get at it in `postInit()`, but then I can't get the error code back to `init()`'s `out error`.
However, I'm going to remove the `out` argument discussion in my response—not because it isn't a valid concern, but because I think there's a more core issue to wrestle with first.
(I'd like @noakesmichael and @mppf to think about this post since they've been the strongest advocates of the new proposal, @lydia-duncan to think about it since she clarified the current "can refer to fields once they're initialized" rule for me and implemented it, and @vasslitvinov too, because I think it (somewhat) relates to his concerns about using an object while `init()` routines are running.)
I've been making two assumptions about the current proposal, which have started to seem shaky this weekend:
The first is that at the time the `init()` routine returns, the object is fully initialized, by which I mean (a) its fields all have values and (b) the object is a legal instance of that initializer's static type (call it `C` for this post) and can do anything that type can do. Put another way, I've been assuming that while `postInit()` can be used to do reasonably arbitrary things with the object, it is not required to make the object into a legal `C`. (Note: I wonder whether this is why Vass objects to changing the dynamic type of `this` within `init()`... because he doesn't think of the object as being a full-fledged `C` until after `postInit()` returns.)
The second assumption (made out of laziness rather than conviction) is that it's sufficient to initialize fields via the assignment operator, as in `this.x = ...;`.
But, as the examples in Paul's Buffers and my sync var cases point out, sometimes fields need to be initialized in other ways, for example by passing them to routines. Today, the compiler does not understand such routine-based initializations as being an initialization of the field, so on master, they have to put this in phase 2.
To make that more concrete, consider a type like this:
```chapel
class C {
  var x: int;
  var y: special_type_requiring_init_by_routine;
  var z: bool;
}
```
Side note: On master today, if I write:
```chapel
proc C.init() {
  this.x = 1;
  this.z = true;
  initField(this.y);
  super.init();
}
```
the compiler complains that I am passing `this` to something during phase 1 (which actually doesn't seem quite accurate, since I'm actually passing `this.y`, which seems like it should be equivalent to referring to `this.y` on the RHS of an initialization expression, which is permitted; i.e., I think this is a bug).
So the way I have to write this on master today is:
```chapel
proc C.init() {
  this.x = 1;
  this.z = true;
  super.init();
  initField(this.y);
}
```
I.e., I have to put it into phase 2 which, in the new proposal, suggests it has to be put into `postInit()`. This in turn suggests that the object isn't actually a fully functional `C` after `init()` returns, as I'd hoped, but only is after `postInit()` has. Oops.
This suggests to me one of the following things:
1) We have to give up on the notion that `C.init()` will result in a legal `C` by the end of the routine. This would also imply that we could no longer permit method calls on `this` or passing `this` in `init()` routines, since it wouldn't be guaranteed to be a legal `C` object as I've been positing. This is my least favorite approach, as it seems to cripple initializers and require `postInit()` routines in most nontrivial cases.
2) We need support for something like `initDone()` (issue #8289) in order to set field `y` within `C`'s initializer, but after field initialization, using a pattern like the following:
```chapel
proc C.init() {
x = 42;
z = false;
this.initDone();
initSpecialType(this.y);
}
```
3) We need the compiler to be smarter than it is on master about handling such cases in phase 1 so that we can write:
```chapel
proc C.init() {
x = 42;
initSpecialType(this.y);
z = false;
// super.init(); would go here on master today
}
```
Two ways that I can think of to make the above work would be: (i) when we see the reference to `this.y` and recognize it as not being a field initializer, we have the compiler insert its `y`s default field initialization just prior to it; or (ii) when we see the field initialization of `z` (or the end of the routine if there were no `z`), we have the compiler insert `y`s omitted field initialization just after `x`'s (as the previous field). In either case, we'd also have to allow `this.y` to be legally passed to routines without equating it with passing `this` to routines (similar to how `this.y` can be used in initialization expressions today).
4) We need something else that hasn't occurred to me.
In the past we had a few cases in our Chapel code base where fields were initialized by a function called from the constructor. Sometimes a single function initialized several fields at once.
This was not out of necessity - it was for convenience. The case I remember indeed sounded compelling. The helper function factored out some computations that naturally were one semantic unit.
While it should be relatively easy to implement this functionality in today's compiler, we need to be careful about the argument intents for such "factor out field init(s)" functions. In particular, we can't use our stock `ref` or `out` intents, because they assume that the actual argument has already been initialized. We need "ref init" and/or "out init" instead. BTW, one of these intents could/should be used in our compiler's "return record by ref" conversion.
Or maybe we can find a way to return the initialization value(s) out of a function and assign into the field or a tuple of fields. For example, what would it take to resolve the above situation with `this.y = initSpecialType(...);`? An initialize-multiple-fields-at-once example: `(this.x, this.y, this.z) = myFactoredOutInit(...);` (could also drop `this`).
So, I support field initialization by calling functions, with the new intents. The situation where a field is in an invalid state past the "initDone" point does not make sense to me.
Regarding the problem that @cassella brought up:
> ...but I couldn't even get through the first one for this core proposal. In master today:
>
> ```chapel
> proc bytes.init(len:int(64), out error:syserr) {
>   this.home = here;
>   super.init();
>   error = qbytes_create_calloc(this._bytes_internal, len);
> ```
>
> Since `_bytes_internal` uses the default value from the record definition, I think I can't reference it in `init()`. I could get at it in `postInit()`, but then I can't get the error code back to `init()`'s `out error`.
This initializer is simpler than it looks. In particular, the `qbytes_create_calloc` call is simply a C function that returns in two places - the normal return slot (returning an error code) and the first argument (returning by pointer-to-returned-type). So it would suffice to call `qbytes_create_calloc` during Phase 1 like this:
```chapel
proc bytes.init(len:int(64), out error:syserr) {
  var buf:qbytes_ptr_t = QBYTES_PTR_NULL;
  error = qbytes_create_calloc(buf, len);
  this.home = here;
  this._bytes_internal = buf;
  super.init();
}
```
I think it would be reasonable to allow the `qbytes_create_calloc` call to "initialize" `this._bytes_internal` if it were to be declared with an `out` argument intent for that argument. I'll say more about that in my next comment here.
Regarding @bradcray's concerns:
> The second assumption (made out of laziness rather than conviction) is that it's sufficient to initialize fields via assignment, as in `this.x = ...;`. But, as the examples in Paul's Buffers and my sync var cases point out, sometimes fields need to be initialized in other ways, for example by passing them to routines. Today, the compiler does not understand such routine-based initializations as being an initialization of the field, so on master, they have to put this in phase 2.
I don't think initializing fields "via assignment" is reasonable, in general. However, we could decide to allow such things. Since "assignment" means run the possibly user-provided `=` function, the LHS needs to already have been initialized. That means that if we decided to allow the first operation on a field to be an assignment, the compiler would have to default-initialize the field before running the assignment, in order to meet 1) the design goal of initializers not to allow user code to access uninitialized memory and 2) to allow the assignment function to have the reasonable starting point that the LHS is initialized.

If the LHS is not initialized, it's not called assignment, it's called initialization. Now, we could completely disallow field assignment in Phase 1, allowing only field initialization. I understood this to be the rule for the previous initializers proposal and the current one. The place I know of where that can get tricky is for things like promoted array assignment (or, say, setting all the fields of an array in a loop).
> sometimes fields need to be initialized in other ways, for example by passing them to routines.
Getting back to this point, I can see three reasonable answers:

- Use `out` arguments to have the called function initialize the memory instead of having the call site do it. In this way we could make the `out` argument intent support this case (even for extern functions). `out` would be similar to returning a value and so suitable for field initialization. But we have to answer the challenge of what the language should do if the same actual argument is passed to multiple `out` arguments (say). (One possible strategy here is to treat `out` arguments in this way only for extern functions.)
- Pass `out` / `inout` / `ref` to a function that actually sets the relevant value from the user's point of view, and allow this to happen in Phase 1. (This is a rewording of the above idea of allowing assignment to set fields in Phase 1.)
- Allow read-write access to a field after it has been initialized in Phase 1. I'm not particularly familiar with what the compiler does about this today, but I see no philosophical problem with it. Note that the scenarios I have in mind do not require access to `this` or to the whole object.

Note that my option 3 here is similar to Brad's "3. We need the compiler to be smarter ...", but I don't think it strictly requires the compiler to know how to add default initialization - it could simply allow `ref` access to fields that have already been initialized, like this:
```chapel
proc C.init() {
  x = 42;
  this.y = zeroSpecialType(); // this is the field initialization
  initSpecialType(this.y);    // this is ref-access to an already initialized field
  z = false;
  // super.init(); would go here on master today
}
```
Anyway, since I think all 3 of these are reasonable options (at least until somebody finds a major issue with all of them...), I don't think this issue presents a serious problem, and I remain fairly confident in the general idea of the new initializers proposal.
Summing up - I understand that we're talking about this question here: "But what if I want to initialize a field by passing a pointer to it to a C function?" - and I think there are many ways that we can solve that without making major changes to the new initializers proposal.
> We need "ref init" and/or "out init" instead. BTW one of these intents could/should be used in our compiler's "return record by ref" conversion.
I'm not for adding more intents to the language. (I think we already have too many!).
> the compiler complains that I am passing this to something during phase 1 (which actually doesn't seem quite accurate since I'm actually passing this.y which seems like it should be equivalent to referring to this.y on the RHS of an initialization expression, which is permitted. I.e., I think this is a bug).
I agree that this is a bug. I think it was our intention to support passing fields like this, and suspect that we might allow it if the explicit `this.` is dropped (but the implementation should still be okay with both).
Sorry to drop this conversation for a few days...
I apologize for the sloppiness in my post about "initializing via assignment" — I should've said "initializing via the assignment operator" and have edited the note to try and fix it in that regard.
I agree with Michael that adding more intents to handle this case seems distasteful. The notion of permitting `out` arguments to initialize fields (effectively Michael's option 2) seems related to the deep-dive topic on `out` arguments that Michael led in December—have I got that right? Of Michael's three options, I think I find this one most enticing. Option 1 seems like it's likely to cause problems down the road for power users. Option 3 seems acceptable but unfortunate in that it essentially requires double-assignment to initialize the field.
I'm noticing belatedly that Lydia's `test/classes/initializers/phase1/sendFieldToFunc.*` tests already capture the pattern of trying to send a field to a function in ways that should be legal but trigger an error, so I'm glad that that's not a new issue (but sad that it's an open one).
> The notion of permitting out arguments to initialize fields (effectively Michael's option 2) seems related to the deep-dive topic on out arguments that Michael led in December—have I got that right?
I discussed this topic with you but didn't include it in the deep dive presentation. I still think it's a reasonable direction, though, especially for extern procs.
Hm, thinking about it more: whether an `out` formal accepts an uninitialized vs. initialized variable does not matter for the callee function. That is, an `out` formal starts out as a default-initialized variable in either case. The caller can make its own determination of whether to assign into the actual or move-initialize the actual. So maybe there is no need for an "out init" intent.
This conversation got more technical than I wanted so I dropped out, but I just got pinged, so let me chime in briefly.
First, from my perspective there is not enough legacy code to justify the effort in supporting whatever the "old" method is.
Second, it's not clear to me what is about to happen. This is common in my life (I'm married with kids). I have gotten used to
```chapel
class Thingy {
  proc init(n: int) {
    this.n = n;
  }
}
```
And I have seen
```chapel
class Thingy {
  proc Thingy(n: int) {
    this.n = n;
  }
}
```
For me, the former is more appealing but for no justifiable reason. My question is: Are either of those patterns the intended direction or am I in for something exciting and new, like I'm about to get on the Love Boat?
Hi Brian -- The new direction continues with the `proc init()` approach, but changes the details of what happens / needs to happen for initializers that need to do more complex things than field initialization. If you had initializers before that called `super.init();` within their bodies, they will need to evolve. If they didn't, it's possible that they will just keep working, and possible that they will require some more changes to keep working (depending on what they did). We'll work with you offline to keep you apprised of changes as they hit master.
Yes, we are using `super.init()` quite a bit, mostly out of habit rather than knowledge. So I expect I'll have questions on that once in a while. Thanks for the heads up!
Since this is implemented and in the release we're about to make, is it time to close the issue?
Yeah, I thought I'd suggested to someone on email that it could be closed after summarizing the conclusions, but it seems like that never happened. I think the summary is here: https://chapel-lang.org/docs/1.17/technotes/initializers.html
A revised definition of initializers has been proposed to address shortcomings in the approach that currently exists on master, as originally noted in issue #8018. This issue is designed to serve as a collection point for feedback on the core of the proposal.
In brief, the proposal is to redefine the `init()` routine to have it take the place of "phase 1" of initialization in the current implementation, and to add a new "postInit" routine that takes the place of phase 2. This removes the need for `super.init()` to serve as the syntactic separator between phase 1 and phase 2, since the two routines will now do so. It also permits `super.init()` to be called earlier within `init()` such that a parent class's fields can be referred to when initializing a child class's fields. (Moreover, this base proposal requires `super.init()` to be called before initializing any fields, though issue #8286 considers relaxing this constraint.)

CHIP 23 describes this proposal in greater detail, and this presentation provides a gentle introduction to it.

Issues #8289 and #8291 describe a few variations on this core proposal that have been proposed in order to make the `init()` and "post-initialization" routines a bit more powerful / flexible. This issue is concerned primarily with the core notion of repurposing `init()` to initialize parent fields first and act as phase 1, and of introducing a post-init routine to serve as phase 2.

TODO: