jckarter / clay

The Clay programming language
http://claylabs.com/clay

destructor of record in record is not called #334

Closed — stepancheg closed this issue 12 years ago

stepancheg commented 12 years ago
import printer.*;

record Inner ();

overload Inner() --> returned: Inner {
    println("construct inner");
}

overload destroy(rec: Inner) {
    println("destroy inner");
}

record Outer (rec: Inner);

main() {
    Outer();
}

prints

construct inner

Record Inner has a custom destructor, and it is not called.

ghost commented 12 years ago

Your custom destructor for Inner isn't called because the Outer record evaluates as a POD type (all of its members evaluate as POD types). If you want your custom destructor to run, you have to define Outer as a non-regular record, e.g. overload RegularRecord?(#Outer) = false;, or as a destroyed type, e.g. overload NonDestroyedType?(#Outer) = false;
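Applied to the example above, the suggested opt-out would be a one-line addition (a sketch of the suggestion, untested):

```
// Mark Outer as a non-regular record so the default POD value
// semantics (including the no-op destroy) no longer apply to it.
overload RegularRecord?(#Outer) = false;
```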

stepancheg commented 12 years ago

So the current behavior is correct? Or are you suggesting a workaround?

ghost commented 12 years ago

Current behaviour is correct as far as I can tell. There's no need to call destroy() by default if it isn't required.

stepancheg commented 12 years ago

Even if all record fields are POD, destroy can still be required (examples: RawFile, SharedPointer).

The weirdest thing is that destroy for "Inner" is called if "Inner" is stored in a variable:

import printer.*;

record Inner ();

overload Inner() --> returned: Inner {
    println("construct inner");
}

overload destroy(rec: Inner) {
    println("destroy inner");
}

main() {
    Inner();
}

prints

construct inner
destroy inner

These kinds of errors are very hard to spot.

If this really is the expected behavior (counter-intuitive as it is), then the compiler should at least issue a warning.

ghost commented 12 years ago

There's nothing wrong with your example: you've explicitly defined the destroy overload, so it takes precedence over the regular-record destroy overload (which is only called for non-POD records).

In Clay both RawFile and SharedPointer are defined as non-regular records and have destroy overloads defined in their respective modules.
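The library pattern described here looks roughly like the following (a sketch of the shape only, not the actual lib source; the field and the body are invented for illustration):

```
// Sketch of the non-regular-record pattern (not the real lib code).
record RawFile (handle: Int);

// Opt RawFile out of default POD value semantics so that its
// destroy overload below is actually invoked.
overload RegularRecord?(#RawFile) = false;

overload destroy(f: RawFile) {
    // the real module would release the underlying OS handle here
}
```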

stepancheg commented 12 years ago

You're explaining why it works the way it works; I get it now, thanks. What I'm saying is that the way it works is overcomplicated, so developers (like me) can easily make this mistake while implementing their own types similar to RawFile or SharedPointer.

Note that neither RegularRecord? nor NonDestroyedType? is described in the language reference. I only found the current behavior after long debugging.

ghost commented 12 years ago

I guess over-complication is an issue for the language designers. At this stage I'm using Clay to learn about compiler design; I really have no idea whether there is going to be any more serious development of the language. I hope so, but... who knows.

Unfortunately, the docs are incomplete and out of date. However, a lot of changes are happening at this early stage, so it may be a while before the docs catch up. I've found there are usually enough examples in the lib to figure out how most things work, though.

stepancheg commented 12 years ago

BTW, the problem seems to be solvable like this:

// fallback: do nothing for types with no destroy overload
destroyIfDefined(v) {}
[V when CallDefined?(destroy, V)]
overload destroyIfDefined(v: V) = ..destroy(v);

Will this work?

jckarter commented 12 years ago

You need to overload RegularRecord?(static Inner) = false; to disable the default value semantics for Inner when overloading destroy(x). I don't think your proposed fix will work. A better general improvement to the language would be to disallow overlapping overloads, so you get an error when the RegularRecord destructor and your custom destructor both match for your type.
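Putting that advice together with the original reproduction, the fixed program might look like this (an untested sketch, using the # form seen earlier in this thread rather than the older static spelling):

```
import printer.*;

record Inner ();

// Disable the default value semantics for Inner so that its
// custom destroy overload is used wherever Inner is destroyed,
// including when it appears as a field of Outer.
overload RegularRecord?(#Inner) = false;

overload Inner() --> returned: Inner {
    println("construct inner");
}

overload destroy(rec: Inner) {
    println("destroy inner");
}

record Outer (rec: Inner);

main() {
    Outer();
}
```

With Inner marked non-regular, Outer no longer counts as POD, so destroying an Outer should also run the custom destroy for its Inner field.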

stepancheg commented 12 years ago

I like the idea of prohibiting overlapping overloads. How would it work?

I'm thinking of a final keyword.

final keyword

Function overloads can be annotated with the final keyword, like this:

final overload foo(..args) { ... }

final takes effect when the compiler resolves a call foo(args):

  1. The compiler finds the symbol foo and computes the list of its overloads.
  2. The compiler picks a matching overload from the list, preferring later overloads (as it does now).
  3. The compiler then looks for overloads that 1) are declared before the matched overload and 2) carry the final keyword. If the original call matches any of these, resolution fails.

So:

define foo;
final overload foo(x) = 1;
[S when String?(S)]
overload foo(x: S) = 2;

foo(1) // good
foo("a") // compile-time error

how final helps with destroy

The destroy function in core.operators.pod should be annotated with the final keyword.

This won't trigger a compile-time error by default, because nobody calls destroy for Inner. But the core.operators.pod module can induce a compile-time error on Outer:

// core.operators.pod
[P when NonDestroyedType?(P)]
forceinline final overload destroy(x: P) {
    // this is either nop in runtime or compile-time error
    CheckCanBeDestroyedLikePod(#P);
}

// core.records
[R when Record?(R)]
overload CheckCanBeDestroyedLikePod(#R) {
    ..for (type in ..RecordFieldTypes(R)) {
        // this is a compile-time error if R = Outer,
        // because the call destroy(Inner) is a compile-time error
        CallDefined?(destroy, type);
    }
}

other possible applications

The final keyword could help diagnose other problems like this one, where a library overload is silently shadowed by a later one.

jckarter commented 12 years ago

I think it would be better if final were the default, and overriding were explicit.

jckarter commented 12 years ago

Closing this ticket, since final overload work is underway by @agemogolk