slightknack opened this issue 3 years ago
Idris provides inline syntactic sugar for changing one field of a larger record: `recordName {fieldName = newValue}` and `recordName {fieldName $= functionToApply}` both return a new record, with one field either replaced or mapped. I see no reason why Passerine can't generalise that idea to other types, providing lens-like syntactic sugar. I haven't fully considered whether it could work for sum types, but I have a hunch it can. Using some arbitrary symbols `>>>` and `$` for operators, it could be that `Just 1 >>> Just ($ pred)` would evaluate to `Just 0`, and `("foo", "bar", "baz") >>> (_, "quux", _)` would evaluate to `("foo", "quux", "baz")`. The syntax could be linked to local mutability: `pair = (_, $ (+1))` would set `pair` to itself with its second component incremented. An annoyance of Haskell is when it asks you to destructure a value to change something, then build it back up again, repeating the same constructors each time.
Thanks for that great idea! Here are my thoughts on it:
In Passerine, we use `::` as the indexing operator. Thus:
record_name::field_name = new_value
As tuples are just records indexed by position rather than by name:
pair::1 = pair::1 + 1
-- or perhaps `+= 1`
I think a generalized syntax to modify ADTs without having to mutate them is a great idea. I'm not entirely sure how we should do this - we could go with either a macro, an operator, or a function. Here's how a macro could work:
tuple = ("foo", "bar", "baz")
new_tuple = tuple but { tuple::1 = "quux" }
where `x 'but y` makes a copy of `x` and applies the set of transformations in `y` to it:
syntax adt 'but transform {
new = adt -> { transform; adt }
new adt
}
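Under that macro definition, the earlier example would expand roughly like this (a hypothetical expansion, assuming the macro body substitutes `adt` and `transform` verbatim):

```
-- `tuple but { tuple::1 = "quux" }` expands to:
new = tuple -> { tuple::1 = "quux"; tuple }
new_tuple = new tuple
-- the lambda takes its own copy of `tuple`, mutates it locally, and returns it
```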
This, of course, is just one option of many. Another option is to expose these ADT mutations as functions. Say we want to construct a tuple with a modified second field:
-- this is a built-in associated function of the built-in `Tuple` type
-- that takes a tuple, a field, and a new value
-- and produces a new tuple with the field replaced
Tuple::mutate_field = tuple field value -> ...
x = (true, false, 7)
x::1 = true
-- this is sugar for
x = Tuple::mutate_field x 1 true
-- and can be used manually:
new_x = Tuple::mutate_field ...
This is honestly a bit cumbersome.
I think your solution, to introduce a generalized syntax, also works well. We'd just have to figure out the exact semantics in all cases, and how it relates to pattern matching. Perhaps we could define an operator like `=` that, instead of assigning, creates a new adt with that part modified, say `pattern ~ expression` (idk, just spitballing here), so then:
record = { foo: true, bar: false }
-- no-go, mutates record:
record::bar = true
-- yes-go, creates a new record:
new_record = record::bar ~ true
Again, a lot of options. My favorite one so far is probably the `but` macro. One important thing to think about is modifying records by function, like with the `$=` you originally brought up. What are your thoughts?
There are four different ways you might want to change a data structure: you might replace a field with a new value, or you might pass its old value through a function to get the new one, and then for both of those you can either return a new data structure or mutate it in place. There should be similar syntaxes for all four, and ideally those syntaxes would be like how you read a value as well. Field replacement is a special case of passing a field through a function (because you can use a function that returns the same thing no matter its argument), but it's important enough that it should be easy.
As I see it, you want composability for sure. Lenses let you say "in the 'name' field of the fourth item of the first item of my data, read/change that." The problem is mutability, which lenses provide no inspiration for: only one level of that nested structure should ever mutate at once. Maybe you wanted to edit the topmost tuple by generating a new edited version of what was inside it. Maybe you only want to mutate the name string itself.
I say only one level should mutate at once, but a way to combine changes is also necessary. If a programmer wants to make two changes to different parts of that 'name' field, she shouldn't need to specify where it is within the two tuples each time — that would be redundant. I like the `but` macro for merging changes, although it would need some work to make sure it behaves well with mutation and satisfies my next point.
Lastly, there's something to be said for ease of nesting. Pattern matching is so beloved by functional programmers because it takes something sequential and imperative — accessing nested parts of a tree structure — and makes it declarative — describing a tree structure with holes. Gone are the days where we only index everything in an array: we create recursive structures. Getter and setter functions are chained in a sequence, yet getters are better as recursive pattern matching (internal rhyme not intended). Setters are marked for the same fate as getters: replacement by something declarative. That's my thought process behind copying pattern matching for the various ways to set things.
Granted, we'd still make short chains like `tuple::1::2` for either getting or setting, but a pattern-matching syntax would be a powerful tool for when things get nested.
Note: I originally wrote a quick response but upon deliberating over it some more I think I've reached a better conclusion.
Before I move on to the meat of my ideas regarding the points you brought up, I'm just going to put the 'ways you can mutate things' into a little list:
By the way, all that local mutation means is that the lifetimes of variables that are mutated can't exceed the scope in which they are defined; in other words, any variables that cross function boundaries must cross as owned values, and cannot exist inside reference cells. With that out of the way:
First thing to note is that mutating a value is the same thing as creating a new value and overwriting the old. Of course, this doesn't mean that the implementation needs to make a copy of the value (vaporization, when fully implemented, accounts for this with reuse analysis), because we can infer that `value` is being overwritten:
-- mutate a value:
value = make_new_value value
We could also introduce a macro that expands to this statement. This cuts away half of our table; realistically, we only need to define a way to create a new value from an old one; the mutation can be handled with reassignment. So the question becomes, how do we:
As you succinctly pointed out, 1 is a special case of 2; we can always make a function that returns a value no matter what, e.g. `_ -> 42`. So, we realistically need to specify two things:
What we basically want is pattern matching for selective construction:
> Lastly, there's something to be said for ease of nesting. Pattern matching is so beloved by functional programmers because it takes something sequential and imperative — accessing nested parts of a tree structure — and makes it declarative — describing a tree structure with holes.
I think it's possible to 'extend' pattern matching to support selective construction. Like how pattern matching 'describ[es] a tree structure with holes', if we describe a tree structure where the holes, instead of being variables to bind, are ways to update values, we can selectively construct new values.
We'll call these patterns where the holes are update rules Construction Patterns. In other words, a construction pattern `p` is a syntax to define a function `f` that, when applied to a value `v` (which matches `p`), produces a new value `v'` that is updated according to the update rules in `p`. This may seem a little abstract at the moment, but I'll provide some potential examples as to how construction patterns should work.
First, consider we define a struct named `Person`, with a couple of fields. We'll make `bob`:
type Person {
name: String,
age: Natural,
hungry: Boolean,
}
-- make bob
bob = Person { name: "Robert", age: 24, hungry: true }
Here's what pattern matching on the `Person` `bob` for age looks like:
-- `bob_age`, in a sense, is the hole. The other fields are ignored; I'm just using `..` for this.
Person { age: bob_age, .. } = bob
Let's introduce a construction pattern:
-- Instead of holes, we must provide functions to apply to each hole.
-- Here, we'll use the `update` keyword to make one, but this could be changed in the future.
-- Remember that construction patterns are a way to define a function, so `birthday` is a function.
-- Note that it mirrors the pattern used in the above example.
birthday = update Person { age: n -> n + 1, .. }
-- we apply the pattern: `bob` has a birthday
bob = birthday bob
After this, `bob::age` is 25. What's neat is that these functions are composable:
-- a `Person` can eat a birthday cake, in which case they are full:
-- (instead of writing out `_ -> false`, `false` could just be sugar for this)
-- (additionally, we could use a variable; we don't have to know the value right now)
eat_cake = update Person { hungry: _ -> false, .. }
-- we can then compose birthday and eat cake to make a birthday party:
birthday_party = person -> person.birthday.eat_cake
-- a year passes, and `bob` has a birthday party:
bob = birthday_party bob
`bob` is now 26 years old and is no longer hungry. Of course, we could introduce a syntax for mutating a value via a function:
syntax value 'after function { value = function value }
-- `bob` is now `27` and still not hungry.
bob after birthday_party
-- this syntax could be used directly with a construction pattern:
bob after update Person { name: "Bob", .. }
So, to recap:
A construction pattern is similar to a normal pattern, where instead of defining holes, we provide a function to change each hole. These functions can be composed.
And we can now fill in the two-way table:
| | new value | mutate in place |
|---|---|---|
| no function | `baby_bob = bob . update Person { age: 0, .. }` | `bob = bob . update Person { age: 0, .. }` |
| with function | `older_bob = bob . update Person { age: n -> n + 1, .. }` | `bob = bob . update Person { age: n -> n + 1, .. }` |

Of course, we could use the `after` syntax for the mutation column. (Remember that `value . function == function value`.)
> but a pattern-matching syntax would be a powerful tool for when things get nested.
I think the idea of Construction Patterns scratches that itch - we'd have to work out the exact semantics, but I think I like this skeleton. What do you think @IFcoltransG?
This matches what I had hoped for, if you'll excuse the pun. The plan is set out logically and workably. The only uncertainty I have is the idiom for combining two records together. 'Construction Patterns' seem perfect for changing one or two values at a time, and maybe they'd be the best syntax for merging various fields of two records: you'd specify which fields of one record to take, and fill the rest with the other record. In terms of homoiconicity, could there be some relation between the patterns for destructuring records and ones for constructing them?
> The only uncertainty I have is the idiom for combining two records together. 'Construction Patterns' seem perfect for changing one or two values at a time, and maybe they'd be the best syntax for merging various fields of two records: you'd specify which fields of one record to take, and fill the rest with the other record.
Here's what I'm imagining:
bob = Person { name: "Bob", age: 17, hungry: false }
alice = Person { name: "Alice", age: 77, hungry: true }
-- specify the field to take from bob, fill the rest from alice
john = alice . update Person { age: bob::age, .. }
This could be useful, say if you have defaults:
Person::default = Person { name: "", age: 0, hungry: false }
joe = Person::default . update Person { name: "Joe", .. }
And again, the keyword `update` is just one option. We could use `with` or `yet` or `but`, etc.
> In terms of homoiconicity, could there be some relation between the patterns for destructuring records and ones for constructing them?
Anyway, I'm still thinking about homoiconicity. The general idea is as follows: A pattern, in general, takes the form:
Label X -- type
(X, ..) -- tuple
{ field: X, .. } -- record
{ Variant X, .. } -- enum
X -> X -- function
-- etc.
For normal construction, `X` must be an expression. For pattern matching, `X` must be a pattern. For pattern construction, `X` must be a function.
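To make the three modes concrete, here's the same record shape used in each mode (a sketch using the syntax from this thread):

```
-- normal construction: X is an expression
p = { age: 24 }
-- pattern matching: X is a pattern (binds `a` to 24)
{ age: a } = p
-- pattern construction: X is a function
older = p . update { age: n -> n + 1 }
```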
Actually, on the topic of functions in pattern constructions: perhaps we can make the argument implicit:
-- instead of this
bob . update Person { age: n -> n + 1, .. }
-- use this
bob . update Person { age: age + 1, .. }
I think that's a lot nicer - it also disambiguates fields that are functions. With implicit arguments, for pattern construction, `X` must be an expression. How the parser deals with these different modes is pretty simple:
Does this make sense?
Anyway, I think we need a more general vocabulary for describing patterns.
If we return to one of the examples I gave above, "in the 'name' field of the fourth item of the first item of my data, read/change that." For a start, tuple construction patterns haven't been nailed down yet, but I presume it could be something like `pair . update (_, new_value)` or `five_tuple . update (.., new_fourth_part, _)`. That works for values, though since it isn't at all obvious how tuple construction patterns would work with implicit functions, I'll pretend the example was more like `data::one::four::name` for some nested records.
For reading a value, the pattern match would be simple enough: `{one: {four: {name: my_val}}}` (possibly with some `..` if records only match when they don't have any extra fields).
Compare that with a construction pattern: `{one: one . update {four: four . update {name: my_val, ..}, ..}, ..}`. Rather redundant. The implicit lambda is a nice trick, but it works best when your function is a lambda.
One could argue there should be a separate syntax for pointfree updating with a function, to simplify the `field_name: field_name . operation`, unless you place a lot of trust in ol' Hindley and Milner for differentiating terms and functions. In fact, one could argue there should be another syntax for applying a constructor pattern within a constructor pattern, to avoid several nested `update`s.
Oh, and looking at your general pattern form, the website uses semicolons for enums, because otherwise a record would be ambiguous with an enum full of 'is' patterns.
> One could argue there should be a separate syntax for pointfree updating with a function, to simplify the `field_name: field_name . operation`, unless you place a lot of trust in ol' Hindley and Milner for differentiating terms and functions.
So we're facing a bit of a dilemma:

1. If we keep implicit lambdas, we have to `field: field . update ...` nested fields. Additionally, if we already have a function that updates as intended, we have to write `field: function field`, rather than just `field: function`.
2. If we remove implicit lambdas, we no longer `.update` nested fields - we just write out each field as a construction pattern.

So, in the context of the example you gave:
-- pattern matching
{one: {four: {name: my_val}}}
-- pattern construction
update {one: {four: {name: _ -> my_val}}}
We no longer have to `field . function` nested fields. However, this means all updates must be lambdas (e.g. `_ -> new_value`). As an aside, we could always introduce syntax that reduces `value <new_value>` to `_ -> <new_value>`.
I think option number 2 is the way to go. We can formalize it as follows:

> If a field `a::x` has type `T`, then in a construction pattern, the associated hole must have type `T -> T`. Thus, for instance, if `T: A -> B`, the resulting update type must be `(A -> B) -> (A -> B)`.
This is nice because it means we can nest construction patterns until we reach a terminal lambda. With the introduction of `value x` as syntactic sugar for `_ -> x` (which I think could be implemented as a macro, depending), the above example becomes:
-- note the use of `value`
update {one: {four: {name: value my_val}}}
> Oh, and looking at your general pattern form, the website uses semicolons for enums, because otherwise a record would be ambiguous with an enum full of 'is' patterns.
This is a bit of an aside, but I think if we:
The use of a comma isn't ambiguous.
> For a start, tuple construction patterns haven't been nailed down yet, but I presume it could be something like `pair . update (_, new_value)` or `five_tuple . update (.., new_fourth_part, _)`.
With the new semantics above and the removal of implicit lambdas, we can make it `pair . update (_, value new_value)`, and then for functions, `pair . update (_, function)`.
I realize we have to make some tradeoffs here, and there is more discussion to be had. We have the Weekly meeting today at 14 UTC if you want to hop in. What are your thoughts?
For implicit lambda syntax, you could replace the `:` with a `->`. Or if `{age -> age - 1, name: value "Calvin"}` is too misleading, any other arrow shape would work.
The `value` macro can be a regular function. It looks the same as Haskell's `const :: a -> b -> a`.
I just want to say thank you @slightknack for making this such a joy to read! I adore your prose 😍 Ever thought of putting together a blog?
@IFcoltransG: Oh gosh, I thought I had written a response, but I guess I didn't send it. I forgot what I wrote specifically, but here's a loose summary:
> For implicit lambda syntax, you could replace the `:` with a `->`. Or if `{age -> age - 1, name: value "Calvin"}` is too misleading, any other arrow shape would work.
Until we've implemented construction patterns as they stand now - i.e. basically just normal patterns delimited with `update`, where each hole is a function - I don't think we should introduce any additional syntax. After we've implemented them, if it becomes apparent that this shorthand is a much-needed remedy, we can discuss it further.
I don't expect the additional arrow shorthand to be strictly necessary, because I expect transformations to primarily be named functions; in other words, instead of writing `{ age -> age + 1, .. }`, you'd do `{ age: Natural::increment, .. }`. Of course, for such a trivial example, this is unlikely, but in more complex cases where transformations have already been defined, I'd imagine this to be the idiomatic solution.
> The `value` macro can be a regular function. It looks the same as Haskell's `const :: a -> b -> a`.
This is a great point! I didn't even realize this, thanks for bringing it up. In Passerine, this would be written as:
value = a -> b -> a
-- alt. idiomatically
value = a _ -> a
And could be included in the standard prelude.
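A quick sketch of how a prelude `value` would read at a use site (hypothetical usage, reusing the construction-pattern syntax from earlier in the thread):

```
-- `value 30` is a function that ignores its argument and returns 30
thirty = update { age: value 30, .. }
-- so `bob . thirty` yields a person whose `age` is exactly 30
```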
I believe the reason I didn't respond initially is because I had started elaborating on another topic, then never got around to sending the response. If I remember what other topic I addressed, I'll write out another response with what I remember.
> I just want to say thank you @slightknack for making this such a joy to read! I adore your prose 😍 ever thought of putting together a blog?
I'm glad you enjoy my writing! I used to keep a blog, but never got around to finishing anything; it was more a loose collection of notes than anything else. There's an archive of it here: https://website.slightknack.workers.dev/writings. The blog was hosted at https://slightknack.dev/, but I've changed that to a placeholder page for now. I plan to flesh it out and add a blog to it at some point.
> I expect transformations to primarily be named functions; in other words, instead of writing `{ age -> age + 1, .. }`, you'd do `{ age: Natural::increment, .. }`
So the typechecker would implicitly decide whether it's a `Natural` or a `Natural -> Natural`?
Perhaps it's a contrived example, but what if your record held a `Natural -> Natural` instead, and you tried to set it using `id = a -> a`, i.e. `{ func: id }`? Either interpretation is plausible, but they would behave differently.
This is not ambiguous, and can be resolved using standard judgement rules:
If we know the type of `age` is `A`, we know that the function must be an `A -> A`; likewise, if we know the function to be an `A -> A`, `age` must be `A`. First, let's define `Natural` and `Natural::increment`, an associated method on `Natural`:
type Natural ...
-- built in type
Natural::increment = x: Natural -> x + 1
-- this is a function
-- type signature is Natural -> Natural
As we can see, `Natural::increment` has a defined type, because it is an associated value. Next, we'll define `incr_age`, which increments the `age` field on the struct using `Natural::increment`:
incr_age = update { age: Natural::increment, .. }
-- age is of type A
-- Natural::increment is of type Natural -> Natural
-- using the judgement rule,
-- function is A -> A iff field is A
-- Natural -> Natural is A -> A, therefore A is Natural
-- therefore age is Natural
What's important to note is that because we know the type of `Natural::increment`, we can deduce that the `age` field must be a `Natural`. Now, here's the twist: let's define a `person` whose `age` is not a `Natural` but a `Natural -> Natural`. Can we apply the `incr_age` update rule?
person = { age: ... }
-- ... is a function of type Natural -> Natural
person = incr_age person
-- this is a comp-time error
-- because Natural -> Natural != Natural
No! We know that the `age` field must be a `Natural`, so this is a type error! Ok, let's now define the identity function and make a new rule that updates a struct's `age` field using it. This will be aptly named `nothing`:
id = a -> a
-- identity function
-- type forall A A -> A
nothing = update { age: id }
-- age is type B, id is a B -> B
From this, we have a generalized update rule. If we try to apply it to our previously defined `person`, there is no error:
person = nothing person
-- person::age is a Natural -> Natural
-- age is type B, therefore B is type Natural -> Natural
-- function is B -> B iff field is B
-- therefore, id is type B -> B is (Natural -> Natural) -> (Natural -> Natural)
-- in this case.
As explained, the reason this works is because we can infer the update rule to be of type `(Natural -> Natural) -> (Natural -> Natural)` in this case. Note that `id` is generic: if we were to define `another_person` whose age is truly a `Natural`, we could apply the `nothing` update rule no sweat:
another_person = { age: 7 }
another_person = nothing another_person
-- another_person::age is a Natural
-- age is type B, therefore B is type Natural
-- function is B -> B iff field is B
-- therefore, id is type B -> B is Natural -> Natural
-- in this case.
In short, because we have well defined judgements for the types of update rules, we can infer the types as needed. Does that make sense?
(To reiterate, the judgement is: within a construction pattern, if a field has type `A`, the update rule must have type `A -> A`.)
> `nothing = update { age: id }`
I like that this looks like it could feasibly be a function with a signature like `forall A (Record<for field in A { A -> A }>) -> A -> A`. Even if it's not implemented that way, this syntax is already rather brilliant.
I'm unsure if that solves the syntax problem. Let's say I have a record with a `foo` field of type A. An update function for it might look like `update { foo: bar }`. There are two scenarios here: `bar` is of type A, or `bar` is A -> A. If `bar` is A, then you'd replace the `foo` field with `bar`. If `bar` is A -> A, you'd pass `foo`'s old value through it, and set `foo` to that.
If `A` is Natural, then this is easy: `update { foo: 1 }` is the former syntax, and `update { foo: Natural::increment }` means the latter. There is no way for Passerine to misinterpret that. Natural and Natural -> Natural are easy to tell apart.
Let's say instead that `A` is Natural -> Natural. Let's also say that the `bar` variable has type T -> T for any type T. (The only function of this type is the identity function.) `update { foo: bar }` can certainly be the `nothing` transformation you mentioned, because (Natural -> Natural) -> Natural -> Natural can be unified with T -> T, where T = Natural -> Natural. In that case, the `update` will do nothing to `foo`.
However, we can also imagine treating `update { foo: bar }` as the other case. If we want to replace `foo`'s value, we need `bar` to be a value of type Natural -> Natural. T -> T unifies with that too, where T = Natural. So it's entirely reasonable for Passerine to follow that line of reasoning, and replace `foo` with `bar`. The `foo` field would be set to the identity function.
These are two different behaviours. A user could supply the identity function hoping not to change their incrementer, then find it has replaced itself with the identity and become useless. Another user could supply the identity function because they don't want their `foo` to keep incrementing things, and become confused when their record remains unchanged.
Thanks for your detailed response!
> If `A` is Natural, then this is easy: `update { foo: 1 }` is the former syntax, and `update { foo: Natural::increment }` means the latter. There is no way for Passerine to misinterpret that. Natural and Natural -> Natural are easy to tell apart.
If `foo` is `A`, the hole for `foo`, i.e. `update { foo: <hole> }`, must be an `A -> A`. If we want to use a value, like one, we have to wrap the value with `value` (`value = a _ -> a`) to turn the raw `A` into an `A -> A`, if that makes sense. Does this resolve the ambiguity, or am I missing the point?
> These are two different behaviours. A user could supply the identity function hoping not to change their incrementer, then find it has replaced itself with the identity and become useless.
The identity function would always produce no change. To replace the function with the identity function, you'd have to do `value (a -> a)`, or, lambda-reducing, `_ -> (a -> a)`, if that makes sense.
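Putting the two intents side by side under this rule (a sketch, where `foo` has type `Natural -> Natural`):

```
keep_foo    = update { foo: a -> a }          -- identity as the update rule: `foo` is unchanged
replace_foo = update { foo: value (a -> a) }  -- `foo` becomes the identity function
```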
I think I understand: you've removed the `update { foo: foo + 1 }` syntax for non-pointfree updates. That code would now have to look something like `update { foo: foo -> foo + 1 }`. Three `foo`s is more boilerplate, but that's probably tolerable. It's only one more name than `variable = variable + 1`.
There are a few reasons to name your parameters. Pointful style is closer to how an imperative programmer thinks, and it avoids the mess of combinators that come from chaining lots of functions pointfree. It's a matter of style, after all. So suppose you're writing pointful code:
value = value + 1
record::value = record::value + 1
record = record . update { value : value -> value + 1 }
The `update` looks a lot longer than the other lines, and loses some of its declarative directness.
Using `=` for implicit lambda syntax might improve the look of that `update` line:
record = record . update { value = value + 1 }
Much clearer. Thoughts?
> I think I understand: you've removed the `update { foo: foo + 1 }` syntax for non-pointfree updates. That code would now have to look something like `update { foo: foo -> foo + 1 }`. Three `foo`s is more boilerplate, but that's probably tolerable. It's only one more name than `variable = variable + 1`.
If we were to introduce a syntax to turn operators into functions, say, something like `1 + 2` becomes `(+) 1 2`, then `{ foo: foo -> foo + 1 }` would just be `{ foo: (+) 1 }` or the like.
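For instance, under such a hypothetical operator-section syntax:

```
-- assuming `(+)` turns `+` into a two-argument function,
-- partially applying it to 1 gives the increment function `n -> 1 + n`
update { foo: (+) 1, .. }
```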
> The `update` looks a lot longer than the other lines, and loses some of its declarative directness.
Yeah, that's true. Even with the old style, I'm not sure you'd write it out inline though:
record = {
value: 7,
name: "Jerry",
flipped: true,
}
record::value = record::value + 1
record = record.update {
value: value -> value + 1,
..
}
It's not ideal, for sure.
> Using `=` for implicit lambda syntax might improve the look of that `update` line:
This is pretty nice! It reminds me of the `but` macro we discussed earlier:
record = record but {
record::value = record::value + 1
}
Of course, I'd imagine we should have things like `+=` etc. for in-place assignment.
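A sketch of how `+=` could desugar, chaining the pieces from this thread (hypothetical):

```
record::value += 1
-- could be sugar for
record::value = record::value + 1
-- which is itself an in-place form of
record = record . update { value: v -> v + 1, .. }
```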
Support a system like the following:
The above relates to contemporary languages:
The following can be supported in the type system:
Also, the ability to freely define sub- and supertypes.
Hey @sirinath, I appreciate the work you put into describing a list of typing rules and ideas!
I note that you introduce some additional syntax, like `def` and `@`, without an explicit explanation as to how they work and why they are needed. It looks like the meaning can be derived by context, with `def` marking a field and `@` marking a name to check, but it's not entirely clear. Would you mind expanding on this?
You also introduce the notion of sub/supertyping via `:>`/`<:`, again without a rationale as to how this would work or the specific semantics. If you wouldn't mind, could you explain your reasoning behind introducing this to the language? Does it work as an alternative to the trait system we have been deliberating? What requirements must be met by `A` and `B`?
> ability to specify if type name is significant or not: ...
This is extremely fine-grained control, and not something we should expose to the user directly: the current model of all types being structural unless wrapped in an outer `Label` is fairly easy to explain and allows for enough fine-grained control.
Thanks!
I have:

- `type` for type definitions
- `def` for value/variable definitions.

`@` is used to denote that the name is significant, hence used for type checking. Say we have `type A; def a: @A`; `a` type checks as nominative.
Also:

- `{}` record - order of fields not significant
- `()` tuple - order of fields significant

If you say `type A <: B`, `A` subtypes `B`, hence inheriting all its fields and methods.
If the type hierarchy is open, `type A :> B` rewrites the type hierarchy so that `A` becomes the parent type of `B`.
> This is extremely fine-grained control, and not something we should expose to the user directly
Many languages do not expose this, but this flexibility will make the type system more powerful. The language would also be nearly as flexible as a dynamic language, but fully type checked.
The syntax I used is for illustration purposes only. You can adopt it as needed as long as the concept is fully supported.
> `@` is used to denote that the name is significant, hence used for type checking. Say we have `type A; def a: @A`; `a` type checks as nominative.
Alright. I don't think that explicitly stating whether names are structural or nominal is the best solution, for reasons I'll explain in a later section.
> `{}` record - order of fields not significant; `()` tuple - order of fields significant
Yes, that's a given. :P
> Many languages do not expose this, but this flexibility will make the type system more powerful. The language would also be nearly as flexible as a dynamic language, but fully type checked.
I'd argue that many languages do not expose this for good reason: it makes it non-obvious how types match in all circumstances; you have to manually parse out the definition in your head. Something like "if it's a `Label` it's nominal, otherwise it's not" is much easier to both explain to newcomers and to reason about in general.
Such a system would also imply that it's possible to have types of the same name but different structure, which while technically making sense, just wouldn't work well in practice.
In structural types one needs some labeling to refer to and reuse type definitions.
Also, for tuple elements it is a convenience to address them by a label.
I feel it is better to have a label with the ability to specify if it is significant or not.
> In structural types one needs some labeling to refer to and reuse type definitions.
I think I need to clarify as to how structural typing works in Passerine, and how labels constrain this type system to allow for nominal typing where needed. In short, base product types, such as records and tuples, are structurally typed: records by field names, tuples by index and arity. These types are used in pattern matching:
{ age: old_age } = { age: 70 }
If we wanted to embed this struct inside another, with "labeling to refer and reuse type definitions" we'd have to explicitly declare them. Using the latest revision of type definition syntax, this would look something like:
Person = type { age: Natural }
`Person`, then, would be a Label that represents `{ age: Natural }`.
Note that `Person` is no longer directly structurally typed. You'd have to first match on name, then match on the structure itself.
jeff = Person { age: 70 }
{ age: old_age } = jeff -- this is invalid
Person { age: old_age } = jeff -- this is valid, match on person first
If we have a struct, we can make it nominal by wrapping it in a Label; the type's definition ensures that it is correct upon construction.
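For contrast, matching the same structure against a different Label would fail (a sketch; `Dog` and `rex` are made up for illustration):

```
Dog = type { age: Natural }
rex = Dog { age: 70 }
Person { age: old_age } = rex -- invalid: `Dog` is not `Person`, despite identical structure
```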
> Also, for tuple elements it is a convenience to address them by a label.
A tuple whose elements are addressed by label is called a record.
> I feel it is better to have a label with the ability to specify if it is significant or not.
I disagree, for reasons I have laid out in the previous post. The current typing scheme provides a nice dichotomy:
We only have to worry about one or the other; there's a clean separation of concerns. This also works well with row polymorphism, as record types whose fields are a superset of another's are matched.
I hope this response helped! In the future, @sirinath, please rationally explain the reasoning behind an idea or suggestion, rather than just posting uncontextualized examples or links to semi-related articles. Thanks!
Goals of a Type System
Passerine's type system is a bit fuzzy at the moment, due to its dynamic nature in terms of implementation. However, Passerine is a functional programming language, and in that vein I'd like it to have an ergonomic albeit flexible type system. Here are the goals we should try to meet:
- (`|`) that are total (verified through algebraic effects) and verified upon construction.

What we need to know
For each of these points, we need to decide the following:
TL;DR and how you can help
I'm not going to answer these questions in this first post; rather, I hope to start a discussion on the topic. I'll try to formalize most of the type system tomorrow, including rules for structuring/destructuring, type definitions, algebraic effects, and so on. I'd really like some input on what you think are needed features of the type system.
The goal of the type system is a simple one: make it feel as flexible as a dynamic type system while still providing the guarantees and speed of a static one. Either way, Passerine is strongly typed.
It doesn't matter if you have 10 years of experience with type systems or 10 minutes. This is the first time I've designed a full-on type system, and I'd like to get it right, from both a theoretical and practical point of view, so don't worry: any feedback is appreciated. Please reply to this issue with any thoughts, issues, comments, questions, suggestions, etc. If there are features in other languages you think have merit, bring them up. Thanks!