KimBruce closed this issue 6 years ago.
So, we need to fix both the specification and minigrace. Your option (1) seems like the simplest thing. It would also allow, at some future time, for a nested type to be refined, if we decided to do that.
Just a reminder that this is a relatively high priority item. Here is the broken code:
type A = {
m(n:Number) → Number
type B = Number
}
def y:A = object {
method m(n:Number) → Number {2}
type B = Number
}
def x: y.B = 3
def z: A.B = 3 // this causes run-time error
print(x)
Nesting types in objects works fine, but nesting types in types causes a run-time error. We seemed to have agreed that the right solution is to make it legal and to have y have type A even if the declaration of type B is not repeated in the object definition. However, it would be an error (at least in a static type checker) if the type was included in y but was not identical.
If there is no objection, I will add this to the spec. Right now the spec in the section "Interfaces and Type Literals" says types can be nested (The subsection NestedTypes adds that types can be nested in other expressions.). The section on type conformance should say something about types in types, so I would add that they must match exactly or be omitted (and thus implicitly included).
I'm also inclined to write better rules for type conformance with variants, by replacing:
(S' <: S) & (T' <: T) ==> (S' | T') <: (S | T)
by
(S <: U) & (T <: U) ==> (S | T) <: U
as the first is derivable from the second and the other rules provided.
Let me know if you object to any of this, otherwise I'll make the changes in the spec tomorrow.
@Kim writes:
We seemed to have agreed the right solution is to make it legal and to have y have type A even if the declaration of type B is not repeated in the object definition.
I don't see where we agreed that. Given that nothing else is implicit, I don't see why acquiring type-methods should be.
I would prefer your option (1), since this would be consistent with our rule that adding a type declaration does not change the meaning of a program — but might cause a program that was previously working to no longer compile.
I don't have a problem with your proposed change to the rules for variants. But shouldn't it be a separate issue? It has nothing to do with types in types, as far as I can see.
OK, no implicit inheritance of a type into an object of that type. The definition must be repeated. (Unfortunately the current bug keeps you from using the one in the type anyway).
On 19/06/2018, at 11:16AM, Kim Bruce notifications@github.com wrote:
OK, no implicit inheritance of a type into an object of that type.
I don't see how that would be possible in Grace as currently conceived. Requests have (explicit) types, but objects don't.
The definition must be repeated. (Unfortunately the current bug keeps you from using the one in the type anyway)
I don't know what the semantics of "A.B" is. Now an interface object A supports an additional request B that returns a type? Surely the declaration of B in A means that the objects of type A must contain a type declaration for B (just as a declaration of m() in A means that the objects of type A must contain a method declaration for m()).
Types-nested-in-objects are one thing; types-nested-in-types are very different things.
James, who shouldn't be thinking about this right now - J
I should have written, "no implicit inheritance of a type into an expression with that static type".
I expected A.B to be a reference to the body of the type definition B nested inside type A. And yes, "the declaration of B in A means that the objects of type A must contain a type declaration for B". For a brief moment I was suggesting we be more lenient and implicitly provide that declaration in the object. But I now withdraw that suggestion.
@kjx said
Types-nested-in-objects are one thing; types-nested-in-types are very different things.
Yes, indeed they are, very different. But we have agreed that we can have types as attributes of objects; that is, `o.B` can be a type, for an appropriate object `o`. Then we can ask: what is the type of `o`? It seems that the answer must be a type with a method `B` that returns a type.
On 19/06/2018, at 15:24, Andrew Black notifications@github.com wrote:
It seems that the answer must be a type with a method B that returns a type.
yep.
It probably has to be something like this:
interface A { type B = Number }
means something close to
interface A {
B -> Type
}
and describes an object doing something like:
object {
def B : Type
}
where Type is covariant in its argument (which should fall out naturally if we're careful).
(thus that first-class-types paper).
J
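The "types are just objects with a type-valued attribute" reading can be sketched in Python, whose types are themselves first-class objects. Everything here (the class name `A`, the choice of `int`) is illustrative only, not Grace semantics:

```python
# Hypothetical Python model (not Grace): Python types are ordinary
# objects, so an object can carry a type-valued attribute, which is
# roughly what `def B : Type` denotes in the sketch above.
class A:
    """Plays the role of the object described by `interface { B -> Type }`."""
    B = int  # the attribute B is bound to a type metaobject

a = A()

# Requesting B returns the type metaobject itself, not an instance of it.
assert a.B is int
assert isinstance(a.B, type)   # B's value conforms to "Type"
assert isinstance(3, a.B)      # and can be used as a type: 3 has type a.B
```

The covariance point then corresponds to the fact that a more precise static type for `B` (like `type[int]`) would still be usable wherever a plain `type` is expected.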
I've implemented this in minigrace for interface types, but don't really know how to do it for `&`, `+`, `-` and especially `|` types. Here's why.
Notice that a type that contains a nested type, like `A` in the original issue,
type A = {
m(n:Number) → Number
type B = Number
}
is not of type Type. It conforms to type Type, but it has an extra method, `B`. (By "extra" I mean in addition to the normal type methods like `&`, `asString`, `methodNames`, `typeNames`, etc.) Similarly,
type X = {
p(n:Number) → String
type Y = String
}
will have an extra method `Y`.
What about `A & X`? It will have to have two extra methods, `B` and `Y`. Presumably `A - X` will have neither `B` nor `Y`. So the methods that are present on `&` and `+` types will depend on the values of the receiver and the argument. This sort of dependent type is not something that we can express in Grace.
In minigrace, the type `&` and `+` operations are expressed in Grace in the standardGrace module. We could perhaps finesse this issue by moving the definition of the objects representing the operator types to intrinsic, and coding them up in the implementation language (JavaScript in the case of minigrace).
As for variant types, this makes me realize that I don't understand what they mean as objects, or what operations they have. What are the operations on `A | X`?
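The dependence Andrew describes can be made concrete in a small Python model. The `InterfaceType` class and its nested-type dictionary are invented for illustration; this is not how minigrace represents types:

```python
# A rough Python model (not minigrace's implementation) of why `&` on
# types with nested types is "dependent": which extra methods the result
# has depends on the *values* of the two operands, not just their types.
class InterfaceType:
    def __init__(self, name, nested):
        self.name = name
        self.nested = dict(nested)  # nested type declarations, e.g. {"B": int}

    def __and__(self, other):
        # A & X carries the union of both operands' nested types
        merged = {**self.nested, **other.nested}
        return InterfaceType(f"({self.name} & {other.name})", merged)

A = InterfaceType("A", {"B": int})
X = InterfaceType("X", {"Y": str})

both = A & X
assert set(both.nested) == {"B", "Y"}   # two extra "methods" here
assert set((A & A).nested) == {"B"}     # but only one here:
# the interface of the result depends on the operand values.
```

Both operands have the same "type" (`InterfaceType`), yet the attribute set of the result varies with their values, which is exactly what Grace's type syntax cannot express.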
On 18 Jun 2018, at 17:06, Kim Bruce notifications@github.com wrote:
And yes, "the declaration of B in A means that the objects of type A must contain a type declaration for B"
I think that Kim and I are on the wrong track here. If there is an object
def y:A = object {
method m(n:Number) → Number {2}
type B = Number
}
then A must be a type that describes the methods of `y`, not one that has the methods of `y`.
So `A` should be something like
type A = interface {
m(n:Number) → Number
B → Type
}
`A` should not have a method `B` any more than it should have a method `m`. What `A` has is a method `methodNames` that returns a set containing the strings "m(_)" and "B", and some other method that lets one get the signatures of all of the names in that set.
In the case of ordinary methods of `y`, like `m`, we don't expect to be able to request that method on `A` just because we can request it on `y`. Types should be the same. The type `A` should tell us only that `y` has a method `B` that returns a `Type`; if we want to know what that type is, then we need to ask `y`.
How does this affect static type checking? Not in any significant way, I think. Kim has already made the restriction that `B` must be defined in a type declaration, so its value will be known statically. Programmers won't be able to write `A.B`; they will have to write `y.B`. As with inheritance, knowing the type of `y` won't be sufficient to know the value of `y.B` — the checker will have to look at the definition of `y` to get that information.
If, instead, we continue down the path that I started when I made `A.B` work for interface types, we will have to answer questions such as: what is the meaning of `A & W` when `A` is as @kim originally said, and
type W = interface {
n(s:String) → Number
type B = String
}
Then `A & W` will have to have a method `B` that returns a type. What type? `Number & String`? `Number + String`? What's a typical use of it?
I understand the reasoning here, but this approach will result in nested types that are useless for static type checking. The whole idea of static checking is to provide enough information so that at compile time we can guarantee that certain errors cannot occur. Look at the following example from above:
type A = interface {
m(n:Number) → Number
B → Type
}
Now let's try to use it:
method m(a:A) {
def x:a.B = 17
// We could also try def x: A.B = 17
}
With this specification we have no idea how to statically type check the declaration of x as all we know is that a.B is a type, but not its value. As a result we don't know statically if 17 conforms to a.B.
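A hypothetical Python rendering of the same problem: if the checker knows only that `a.B` is some type, whether 17 conforms can only be decided by running the program. The class names are invented:

```python
# Hypothetical Python analogue: the annotation says only that B is a
# type; whether 17 conforms to a.B can only be discovered dynamically.
class A:
    B: type  # all a static checker knows: B is bound to *some* type

class WithNumber(A):
    B = int

class WithString(A):
    B = str

def check(a: A) -> bool:
    # the checker cannot decide this statically; it depends on a's value
    return isinstance(17, a.B)

assert check(WithNumber()) is True
assert check(WithString()) is False
```

Both arguments have static type `A`, yet the check comes out differently, so nothing about `A` alone settles whether `def x:a.B = 17` is well typed.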
Joe and Thuan have been working on such issues, and it is clear that for static type checking we need both to keep track of the fact that B is a type (the equivalent of B -> Type) and to know (statically) what its value is.
Now languages like Beta (and gBeta) and Scala try to loosen the constraints of equality by placing constraints on types, e.g., writing
type B <: {m:Number -> String}
so that we know something about B, but not complete info. However, that makes the type system incredibly hard to work with (and prove correct). I don't see that we want or need to do this for Grace.
Our use case for Grace is importing types from modules. That gives us an object with type components. In that case we generally don't explicitly write the type of the module, but we do build it implicitly (just as we do the object itself). To type check the import:
import "external" as ex
we will need the (implicitly constructed) type of ex to have type components -- thus this is important to statically type checking programs.
I presume the "&" (with possibly conflicting type definitions) will mainly come in by combining traits (most commonly as dialects). In those cases we will generally need the types to be the same. Thus the constraint is that multiple definitions of the same type (in the different arms) must be the same. If not, existing code in the dialects will break. (Examples available upon request.)
This issue might be "new" research. I'm not sure if others have faced these issues. Anyone know?
I understand the need to put types in modules. In such a case, you are accessing the type component of a (module) object, not a type component of a type. So there is no problem. You talk about "type checking the import statement
import "external" as ex
"; I fail to see what type checking needs to happen for that statement. Sure, when you use `ex.T` for a type component `T` you will need to keep track of what `T` is bound to in `ex`. This seems similar to inheriting from `ex.y` — you need to keep track of what `y` is bound to in "external". This sounds like value propagation, not type checking.
Can you give me a use case for type values in types? In the example you keep on using, `A.B` is just `Number`, so one could as well write `Number`, and everything would be fine.
It seems to me that putting types in objects is useful exactly when — and only when — two different objects, with the same type, have different type values. That is, when I can have two objects `v` and `w`, with `v.T` and `w.T` both being types and `v.T` ≠ `w.T`. Then, if `v` and `w` can both have type `A`, one can usefully use `A.T` as a type. If `A.T` is fixed, then what's the use of it?
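Andrew's criterion can be sketched in Python (names invented): two objects share the "type" `A` yet carry different type values, so requesting `T` from the value, not from `A`, is what does the work:

```python
# Hypothetical Python sketch of the point above: nested types earn their
# keep only when two objects v and w share a type A yet v.T != w.T.
class A:
    T: type  # A promises a type attribute, without fixing its value

class V(A):
    T = int

class W(A):
    T = str

v, w = V(), W()
assert v.T is not w.T                          # same "type" A, different T
assert isinstance(42, v.T) and isinstance("hi", w.T)
# If A fixed T once and for all (say T = int always), writing v.T would
# gain nothing over writing int directly, which is exactly the question.
```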
I think that Kim and I are on the wrong track here.
Reading the above, I'm certainly confused. As for the tracks, I couldn't possibly comment...
`A` should not have a method `B` any more than it should have a method `m`.
I think I agree here. We've been confusing type metaobjects with the objects they describe, and the reflective interface offered by those type metaobjects with the syntax of the static type system.
Thinking rather hard and rereading to try to get this right (each time I look I think I've got it but convince myself I haven't):
Hopefully we can agree that an object that contains a single type attribute that is an alias for `Number`:
def a : A = object {
type B = Number
}
should conform to (be described by) a type like this:
type A = interface {
type B = Number
}
What's evil / confusing here is that we're using the same syntax in interfaces and objects to talk about type attributes, but that syntax must mean different things in different places. There's another asymmetry here that probably makes things worse: we don't use the `method` keyword in interfaces, but we've kept the `type` keyword.
The question though is: what is the return type of the request `a.B`? That type cannot be `Number`, because that's the value that the request returns - a metaobject representing the type `Number`. Rather, I think the return type of `a.B` has to be something very like `Type[Number]` --- what Java writes as `Type<Number>` - the type of the type metaobject `Number`, in exactly the same way as the return type of a method returning `2` is the type `Number`.
We can see this by rewriting the two snippets, removing the `type` keywords.
Starting with the object first, because it's easier:
def a' : A' = object {
def B : Type[Number] = Number
}
This object `a'` (was `a`) accepts a request `B` and returns the type metaobject representing the type `Number`. Operationally, that should be exactly the same behaviour as the first declaration of `a` above. To capture the static type denotation, we here give `B` the static type `Type[Number]`.
As to `A'`, the interface object itself, I can expand that out to an object with an explicit definition of a match method:
def A' = object {
inherits pattern
method match (other) -> Boolean {
match (other)
case { _ : interface { B } -> (Number == other.B) }
case { _ -> false }
}
}
This will be the same operationally, but I wouldn't expect a static type system to realise `A'` is the same as `A`! Although of course, I could have just gone halfway --- a static checker should be able to realise that this `A''` is the same as the original `A` above.
def A'' = interface {
B -> Type[Number]
}
With this specification we have no idea how to statically type check the declaration of x as all we know is that a.B is a type, but not its value.
In Java, we know the type of `getClass()` in class Foo is `Class<Foo>`. In Grace, the type of `a.B` is `Type[Number]`. All static. All fine.
This issue might be "new" research. I'm not sure if others have faced these issues. Anyone know?
I think Java's had types like `Class<Foo>` since it got generics. The Bruno & co First Class Traits paper covers this quite extensively, although that one is recent. I think some of the confusion is that e.g. in Scala, types are nested both within objects and within types. So `A.B` in Scala is precisely the type `B` nested inside type `A`.
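For what it's worth, modern Python has a direct analogue of this distinction: the annotation `type[int]` plays the role of `Type[Number]` / `Class<Foo>`, distinguishing the metaobject's static type from the type it represents. A minimal sketch (the helper name is invented):

```python
# Python's analogue of Class<Foo> / the proposed Type[Number] is the
# annotation type[int]: the *value* returned is the metaobject int,
# while its *static type* is type[int].
def number_type() -> type[int]:
    return int  # returns the type metaobject, statically typed type[int]

B = number_type()
assert B is int          # the value is the metaobject itself
assert isinstance(2, B)  # a method returning 2 has type B, i.e. int
```

A static checker (e.g. mypy) can verify callers of `number_type` without running the program, which is exactly the "all static, all fine" claim above.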
Funnily enough, though, perhaps Grace may as well have those types too. The question is the epistemology of the claim that in Grace all type attributes are invariant. Operationally, types don't have to be invariant, just as operationally, types don't have to be manifest: Grace's operational semantics are uniform enough that it's clear what should happen in any of these cases. The question is: do we leave that distinction e.g. to a static type system or a dialect, or do we bake it in?
You talk about "type checking the import statement
import "external" as ex
I don't see how you can type check that. You could however typecheck this:
import "external" as ex : EXTERNAL
where `EXTERNAL` is the expected type of the module, probably imported from elsewhere. This gets you Modula-2 style separation of interface & implementation. If you want it.
This sounds like value propagation, not type checking.
The problem is that types for things like inherit clauses, modules, etc, are implicit in Grace. We don't have to write the explicit type of the imported module; rather we require the type-checker to partially evaluate the import statement. Were I attempting to write a type checker, I'd start by requiring all annotations to be explicit - including on imports, and objects being used or inherited (like the Trait types from the Bruno paper).
Can you give me a use case for type values in types?
Sure: type abbreviation — or, actually, wherever we have a type of an object with a nested type inside it. If types are really, truly, epistemologically invariant we may as well split the difference and have types nested inside types.
Given
type A = interface {
type B = Number
}
If we do, then any object that conforms to `A` must have exactly the same `B`. At which point we may as well write `A.B` to denote that type, and, yes, give `A` an attribute that returns that type!
Which takes us right back to the start. Those tracks are clearly a loop.
@Kim started by saying that he thought we should be able to omit nested types from objects, since they can be inferred from the type. I pointed out that this is inconsistent with the rest of Grace, where we infer types from objects, rather than objects from types. But Kim's larger point remains true: there is redundancy here.
@kjx also made this observation.
What I'm saying is that, instead of inferring the object component from the type, we could infer the type component from the object.
The next question is whether `A.B` ought to be legal. This notation will cause problems if the name of a nested type clashes with one of the other methods on types, like `methodNames` or `asString`. @kjx's idea that `Type` should be parameterized might work better. The type checker would then ask `A.signature "B"`, and the result would be a signature with a result type of `Type⟦Number⟧`.
When I'm asking for use cases, you are missing what I'm asking for. I know that objects containing types will have types that contain types; I was the one who pointed this out! What I've been hearing, though, is that the type in the type must be identical to the type in the object. So I'm asking for a use case in which it helps anything to repeat the type inside the type (and check that it really is equal to the type inside the object).
The larger point that I'm trying to make is that what a type-checker needs to carry around as it does its checking will in general be different from, and more detailed than, what Grace's type syntax allows one to write. A case of this is information about inherited objects — which will need to be a lot more than their interface types, since it will have to include information about confidential methods and about abstract and required methods. As far as I can see, the value of a type component is just one more example of this.
@kjx writes
If types are really, truly, epistemologically invariant we may as well split the difference and have types nested inside types.
I don't understand what this means — neither "epistemologically invariant" nor "split the difference". Can you explain?
So I'm asking for a use case in which it helps anything to repeat the type inside the type (and check that it really is equal to the type inside the object).
the Modula-2 case: I want to import a module, and I want to write down the type of the module I'm about to import (so that my code can detect if it runs against a binding of the module with different types).
The larger point that I'm trying to make is that what a type-checker needs to carry around as it does its checking will in general be different from, and more detailed than, what Grace's type syntax allows one to write.
We can write what we like. The Traits paper wrote something like `Trait[Provided, Required]` to represent a trait, but that type was also a subtype of `Provided` (especially useful if you had a trait that didn't require anything). We could write something I remember @kim calling a "classtype" as, I dunno,
Trait[Provided, Required, Confidential]
where each of those arguments were interfaces (putting operations in there seems a bit freaky even to me...), thus letting us write down explicit types there as well.
This at least has the implementation advantage of letting a checker separate the checking from the "inference" or "partial evaluation", would be useful for tests, and, who knows, may offer other benefits in practice.
A case of this is information about inherited objects — which will need to be a lot more than their interface types, since it will have to include information about confidential methods and about abstract and required methods. As far as I can see, the value of a type component is just one more example of this.
yes, that's absolutely right.
"epistemologically invariant"
whether the only relation between types in Grace types and types in Grace objects is that they are and always will be invariant. This is similar to the question as to whether e.g. a parent in an `inherit` or `use` clause is epistemologically / essentially manifest, part of the language model --- or just accidentally manifest (suits current implementations).
"split the difference"
if we know that some type A has a 'type member' B, and that every object that conforms to A must always have exactly the same B (epistemologically invariant :-) then we may as well permit the syntax `A.B` to refer to that type. Which is what Kim wanted in the first place, and then we said why we thought it couldn't or shouldn't be done.
Thanks for the explanation.
What I'm saying, with respect to your "Modula 2" example, is that if it is essential that a type `B` in an imported module `ex` be exactly `W`, then why would I want to write `ex.B` rather than `W`?
The alternative would be to write something like
assert (ex.signature "B") shouldBe (W)
There are three problems with making the interface of a type depend on the number and names of the nested types that it contains.

1. We can't express types like that in Grace.
2. The names of the nested types may conflict with the existing methods on types, like `match`, `asString`, etc. In that case, which one wins? Either answer is a problem if you want the other one.
3. We have to decide how these methods are treated when the types containing them are combined with the type combinators `&` and `+`. Kim's suggestion was that the operator not be defined unless the two nested types with the same name are identical. This would mean that `&` and `+` are not always defined, so types no longer form a lattice.

This conversation encouraged me to go back and look at the design of the ML module system, which does face similar issues (though their module language is more restricted than the full language — modules are not first class). ML allows the user to decide whether the type in a module is transparent to users (using ascription) or totally abstract (opaque). Modula-3 does something similar, but allows "partial revelations", which disclose some of the methods of a type, but not all. This can be useful when you have mutually recursive types/structures that need full information about the other (e.g., to do operations efficiently), but you only want the world to know some of the operations (sort of like C++ friends).
Both of those make sense when creating ADTs (like Modula-2), but completely opaque types seem much less relevant to an object-oriented language. If an object has type T, where T is opaque, you can't send any messages to it — making it pretty useless: you can pass it as a parameter, but once you have it, what can you do with it?
At one point I suggested we think about supporting partial revelations in Grace, but they seemed less relevant for our target audience, so I dropped it (I believe with your encouragement).
So what are the use cases for types in types? For me, the main consideration is importing modules, especially in a statically typed dialect (my main focus these days). As we have discussed, an imported module is treated as an object in the importer.
import “MyModule” as mm
What is the type of mm? If the imported module is statically typed, then the types of all the public components are already known and have been checked (and recorded in the generated gct file). It seems plausible that we might group these together into a type MM and associate (either implicitly or explicitly) mm with the type MM.
I presume that most of the time we would want to do it implicitly to keep the overhead small. (This is a much smaller step than leaving off some generic parameters.) However, I could imagine that one might want to apply information hiding and instead label mm with an explicit type that omitted one or more of the public features of mm. I.e., we provide a type MM’ such that MM <: MM’ and write
import “MyModule” as mm: MM’
In this case, uses of mm in the importer would be type-checked under the assumption that it only had the features in MM’.
OK, that is my main use case. How does this impact embedded type definitions?
Let’s make it concrete and suppose I have a module that defines a list of numbers. E.g.,
type NumberList = {
first -> Number
addFirst(n: Number) -> Done
...
}
def emptyList: NumberList is public = …
...
Now I want to use it:
import “ListModule” as lm: LM // LM can be all public features of ListModule or only some or could be implicit.
// we’ll assume here it contains at least emptyList -> NumberList and something about type NumberList
…
lm.emptyList.addFirst(5)
I need to type check the last expression. lm.emptyList is OK, since LM contains a declaration of emptyList. However, the result has type NumberList. Assuming that we treat lm the same as other identifiers, we look up NumberList in its type, LM. If all we know about NumberList is that it is a type, then we are out of luck (e.g., if LM says NumberList is a type, but gives no other information about it). On the other hand, if the definition of NumberList in LM is transparent, then we know it has a method addFirst that takes a Number and returns type Done. One could argue that we should just look up lm.NumberList here, and that would work, as we know where lm is defined. However, if lm were instead a formal parameter in a method, we would have no idea where the value was coming from and hence what lm.NumberList is (while if the type included the full definition of NumberList, LM.NumberList would be fine).
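Kim's transparent-definition case can be mimicked in Python, where a nested class on the (invented) module type `LM` lets the lookup succeed through the type alone, without inspecting the value bound to `lm`:

```python
# Hypothetical Python sketch of the lookup described above: lm is
# treated as an object whose type LM records its type component
# transparently, so NumberList can be resolved via LM itself.
class LM:
    """Stands in for the (implicit) module type of "ListModule"."""
    class NumberList:            # transparent: the definition is visible
        def __init__(self):
            self.items = []
        def addFirst(self, n: float) -> None:
            self.items.insert(0, n)
        @property
        def first(self) -> float:
            return self.items[0]

def use(lm: LM) -> float:
    xs = lm.NumberList()         # resolvable from the *type* LM alone,
    xs.addFirst(5)               # even if lm is a formal parameter
    return xs.first

assert use(LM()) == 5
```

Had `LM` declared only `NumberList: type` with no definition, the body of `use` could not be checked against `addFirst`, which is the "out of luck" case.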
My understanding is that Andrew is arguing that this is a situation like inheritance where we need to use more than is provided by the type system in order to get the type-checking right. As someone struggling to get type checking working on inheritance, I would rather avoid that when we can simply allow embedded types in types and have imported objects treated exactly like other features.
Andrew, can you provide a use-case where totally opaque types are the way to go? I.e., where the embedded type only has the information that the object has a type named T, but no other information about it. I think I can imagine Modula-2 style ADT implementations with opaque types that might be useful, but not so much if written in an OO style.
Kim
Andrew, can you provide a use-case where totally opaque types are the way to go? I.e., where the embedded type only has the information that the object has a type named T, but no other information about it.
You already have done this yourself — it's when you want ADTs, which work through type abstraction. The module (or other object) `mod` defines a "cookie" type `mod.T`, and also provides some methods that return `mod.T` objects as results, and take `mod.T` as arguments.
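A minimal Python rendering of the cookie pattern (all names invented; opacity here is by convention rather than enforced by a checker):

```python
# The module hands out values of an abstract "cookie" type and takes
# them back, without clients ever depending on what the type really is.
class _Handle:            # opaque by convention: clients never name it
    def __init__(self, n):
        self._n = n

def make(n):              # returns a mod.T "cookie"
    return _Handle(n)

def combine(a, b):        # takes mod.T arguments, returns a mod.T
    return _Handle(a._n + b._n)

def reveal(n):            # the only sanctioned way back out
    return n._n

assert reveal(combine(make(2), make(3))) == 5
```

Clients can route cookies through `make`, `combine`, and `reveal` without knowing the representation, which is the type-abstraction use Andrew is pointing at.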
This seems to me to be the only real use for nested types. In other situations, putting the type in the module is the wrong thing to do. For example, the collections module in minigrace defines type Collection, but unless the user of the module knows exactly what's in this type, they can't make use of Collection objects. So what the new module structure that I devised for SmallGrace does is put all of those type definitions in a trait, so that they can be reused in multiple places.
Of course, as Grace currently stands, the only way to put something in a trait that can be used in multiple places is to put it two levels down — so the types are declared in a trait, which is itself declared in a module. (Look at Grace-Modules/basicDefinitionsBundle.grace, and basicDefinitions.grace). But if, to import that module, I must know or state what the types are that it contains, then there is absolutely no point in having the module define the types!
This seems to me to be the only real use for nested types.
Variants of family polymorphism are arguably the most important use. But all the variants that I like tend to have types in objects, not types in types.
(Which again spins us back to e.g. Scala, where A.B means something like the type T s.t. ∀ o: A, o.B <: T.) I think. Can't remember. Could be wrong. Used to understand some of this stuff, have forgotten lots of it.
In addition to the three issues that I listed a fortnight ago, I've just come across another issue when I started writing a test.
- What are the conformance rules for types and objects?
Here is my test:
type T = interface {
x -> Number
type U = String
y -> U
}
method foo -> interface {
x -> Number
type U = String
y -> U
} {
object {
method x {...}
method y {...}
}
}
print(foo)
If you believe that types in objects are represented as methods, then presumably you believe that the above program is type incorrect. Why? Because the object returned from method `foo` has no type component `U`, and therefore does not have the type specified by the return type annotation on `foo`.
Currently, the runtime type checking does not detect this, because what it does is look at the method names in the interface, which includes `x` and `y` but not `U`. `U` is there, but in a separate object, where `U` is bound to `String`.
I can, of course, add `U` to the list of methods required for an object to conform to the interface. Is that what you want?
Yes, the object returned by foo must have the definition of type U = String in it to conform to the type T. (Recall I originally proposed implicitly copying that into the object as part of its declaration as being of type T, but backed away from that.)
By the way, we are in the process of becoming more conservative about embedded types in our static type checker. Following through the complexities of embedded types was becoming a huge headache. As a result, our current attempt will only allow types to be defined at the top level of modules. Thus, when we import a module:
import “MyModule” as mm
you will be able to write mm.T if T is defined in MyModule at the top level, but otherwise no occurrences of embedded types.
I’m not sure this is absolutely necessary, but looking at Odersky’s papers leading up to Scala, it seemed like we were right on the edge of undecidability. I suspect we will be OK with embedded types (assuming there is no variance allowed in extensions), but I’m not sure it is necessary either for the examples we have in mind.
The question then is: are there compelling examples for courses at the intro level that would require true embedded types? There are classic examples like a type Graph that has embedded types Edge and Vertex, but I believe those could just be done with all three types at the top level, or by having Graph take type parameters for Edge and Vertex.
Nevertheless, at this point, in order to make progress, we are restricting the language in order to try to get a complete type checker for a language with generics (but likely not bounded generics) this summer. We can always extend it later, but we were just getting ourselves tied up in knots for something that is unlikely to be crucial for intro students.
When should we meet this week?
Kim
On Jul 8, 2018, at 12:30 PM, Andrew Black notifications@github.com wrote:
In addition to the three issues that I listed a fortnight ago (https://github.com/gracelang/language/issues/124#issuecomment-400028702), I've just come across another issue when I started writing a test.
What are the conformance rules for types and objects? Here is my test:
type T = interface {
    x -> Number
    type U = String
    y -> U
}
method foo -> interface {
    x -> Number
    type U = String
    y -> U
} {
    object {
        method x {...}
        method y {...}
    }
}
print(foo)
If you believe that types in objects are represented as methods, then presumably you believe that the above program is type incorrect. Why? Because the object returned from method foo has no type component U, and therefore does not have the type specified by the return type annotation on foo.
Currently, the runtime type checking does not detect this, because what it does is look at the method names in the interface, which includes x and y but not U. U is there, but in a separate object, where U is bound to String.
I can, of course, add U to the list of methods required for an object to conform to the interface. Is that what you want?
I'm not clear where we are going with this issue.
It does seem clear to me that if we have an object defined like this
def o: A = object {
method m (n:Number) → Number {n + 1}
type B = Number
}
then we can use o.B as a type, e.g., in
def y:o.B = 12
It also seems clear that o's interface describes methods m and B. Something like
type o'sInterface1 = interface {
m (n:Number) → Number
B → Type
}
or, maybe, as @kjx has suggested,
type o'sInterface2 = interface {
m (n:Number) → Number
B → Type ⟦Number⟧
}
(although I'm not sure why type Type should have a type parameter, since it does not depend on that parameter).
Both of these are quite different from giving o'sInterface a method called B. Currently, types have methods like methodNames → Sequence⟦String⟧ and methodNamed(name:String) → Signature ... I'm not sure if we have ever fully specified type Type.
So my proposed resolution is to put enough information in methodNamed "B" to keep writers of type checkers happy, but not to give types additional methods.
I've just re-read all of this issue, and I'm none the wiser. The discussion is wide-ranging and interesting, but I'm not clear where it leads.
Currently, minigrace puts an additional method in a type when that type declares a nested type. This is dependent typing: the type of a type depends on the value of that type. I think that this should be retracted. Minigrace does not know what to do with those extra methods when computing A & B or A + B.
The spec currently says that
type Type = interface {
matches (o:Unknown) → Boolean
& (other:Type) → Type
| (other:Type) → Type
+ (other:Type) → Type
:> (other:Type) → Boolean
<: (other:Type) → Boolean
== (other:Type) → Boolean
≠ (other:Type) → Boolean
hash → Number
asString → String
asDebugString → String
interfaces → Sequence⟦Interface⟧
}
@kjx has suggested that Type have a type parameter; I'm not clear why, since it does not use one. In other words, given the above definition, Type⟦Unknown⟧ and Type⟦Number⟧ would be identical. Should the above definition of Type be changed? If so, to what?
(OK, I want to add a method A :=: B defined as (A <: B) && (A :> B), but that's a tiny augmentation compared to what's here.)
Commenting on my own comment:
Should the above definition of Type be changed? If so, to what?
Perhaps Type should have an additional method myType, so that Type⟦T⟧ would be the above interface with the additional method myType → T?
I'm just about ready to get rid of nested types altogether and just allow type definitions at the top level of modules. They can be declared confidential if we don't want to export them, but I don't think nesting types in other types or objects to hide information is valuable enough to warrant all the complexities. [I realize this is the opposite of what I have been arguing for years, but working on the implementation made me realize how complex this all is.]
Based on the discussion in #164, I'm closing this issue. Types can't be declared inside types, only inside objects (and thus modules). An object containing a method declaration method a -> A and a type declaration type U = Expr has an interface that contains the method signatures a -> A and U -> Type⟦Expr⟧, but the interface does not itself have methods a or U.
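This resolution can be illustrated with a small sketch (A and Expr stand for arbitrary types, as above):

```grace
def o = object {
    method a -> A { ... }
    type U = Expr
}
// o's interface contains the signatures
//     a -> A
//     U -> Type⟦Expr⟧
// The interface describes those methods of o, but the interface
// object itself does not acquire methods named a or U.
```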
I was surprised to find out that we can nest types in objects, but not types in types (at least not in a way that they can be used). That is, the following gives an error:
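For reference, the failing example, as quoted elsewhere in this thread, is:

```grace
type A = {
    m(n:Number) → Number
    type B = Number
}
def x: A.B = 3   // run-time error at this use, not at the definition
```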
(Interestingly the error does not show up in the type definition, but in its attempted use in the def x statement.) On the other hand, the following is fine (with the same definition of A).
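The working object-based version, again as quoted elsewhere in the thread, is:

```grace
def y:A = object {
    method m(n:Number) → Number {2}
    type B = Number
}
def x: y.B = 3   // fine: B is nested in an object, not in a type
```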
While I found this discrepancy in minigrace, it corresponds to what I found in the language definition (though the language definition doesn't mention embedding type definitions in other type definitions). I would like to see us be consistent and allow type definitions nested in types.
We have a couple of choices as to how to handle this (esp. in a statically typed dialect).
(1) If, as above, a type A includes an embedded definition of type B then any object of type A must also include the same type definition.
(2) We could allow some implicit type definitions in an object by declaring that the object implements a type. Suppose we have the definitions of A and B as above. We could decide to allow the following:
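A sketch of what option (2) might permit (the object body is my guess at the elided example; the name o' is the one used in the surrounding discussion):

```grace
def o': A = object {
    method m(n:Number) → Number { n + 1 }
    // no declaration of type B here; under option (2), o'
    // implicitly acquires type B = Number from the type A
}
def w: o'.B = 7
```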
where o' implicitly picks up the definition of B from the type A.
I understand that there might be an argument for allowing more flexibility in a dynamically typed dialect of the language. However, I'm not sure how one would deal with a situation where the type and the object have conflicting definitions (e.g., in the type, B is Number, while in the object o:A, B is String), except by checking at runtime that they are the same type.