I think it's reasonable to say that `let` in Steme simply is let-polymorphic. There is nothing saying that what is a macro in Scheme cannot be primitive syntax in Steme. Indeed, the primitive syntax of RnRS is just one possible basis: `if` could be built from primitive `cond`, as in Elisp, and `lambda` could be built from primitive `case-lambda`. (Indeed, in the T language, `lambda` is a wrapper over a primitive procedure `add-method!`.)
We can, of course, make `let` an essential core form in Steme. However, when its semantics diverges from that of a `lambda` binding, we diverge from the Scheme language. This may be necessary. The `if`/`cond` and `lambda`/`case-lambda` examples are, however, a bit misleading because they do not involve any non-trivial change in semantics. (NB: In Unsyntax, `lambda` is actually first reduced to `case-lambda` so that optimization opportunities aren't lost.)
The question is also relevant to imported bindings. For example, after expansion, can the top-level program together with all imported bindings be understood as one large `let(rec)` construct in which all (alpha-renamed) top-level bindings (of the libraries and the top-level program) are bound? In ordinary Scheme, this is a good way to explain the semantics of linked programs. In Steme, under the assumption of an HM type system, this would mean that some values exported from libraries are monomorphic and others polymorphic.
Alternatively, we could replace HM by System F. System F is more expressive, and there are no implicitly polymorphic types because every "polymorphic" type gets an explicit universal quantifier binding its type variable. Type inference for System F isn't decidable, though. Internally, Haskell even uses an extension of System F, called FC (or FC(X)). Whatever surface syntax and type system is used for the source code, after expansion it has to be typeable (and typed) in FC, which is then handled by the compiler. We could go a similar way.
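As a small sketch of what this looks like in today's Haskell (using the `RankNTypes` extension; `useTwice` is a made-up name for illustration, not Steme or FC syntax): the universal quantifier appears explicitly in the type, and it has to be written down, because this kind of polymorphism cannot be inferred.

```haskell
{-# LANGUAGE RankNTypes #-}

-- System-F-style polymorphism: the argument's type carries an explicit
-- universal quantifier, so f can be used at two different types in the body.
useTwice :: (forall a. a -> a) -> (Int, Bool)
useTwice f = (f 1, f True)

main :: IO ()
main = print (useTwice id)  -- prints (1,True)
```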
Speaking of polymorphism and HM: Scheme's numerical tower is sub-typed, but sub-typing is not only incompatible with HM, it also greatly increases the complexity of the type system and of type inference.
I think type classes, while not corresponding perfectly to Scheme's numeric tower, would be a better solution regardless. Also, I don't believe R7RS-small requires the full numeric tower anymore.
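For illustration, a minimal sketch of how type classes sidestep subtyping (standard Haskell; `average` is a made-up example, not a proposal for Steme): an integer literal is overloaded via `fromInteger` rather than being a subtype of the reals, and mixing representations goes through an explicit, inferable conversion.

```haskell
-- No subtyping anywhere: length returns Int, and fromIntegral converts it
-- to whatever Fractional type the division needs.
average :: Fractional a => [a] -> a
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  print (average [1, 2, 3 :: Double])  -- 2.0
  print ((2 + 0.5) :: Double)          -- 2 means fromInteger 2, no coercion
```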
I think we should go as far as type checking remains decidable with annotations; HM only ensures that type inference is decidable. That means we can go to FC or Fω, but not to dependent types (let's open another thread if you want to talk about dependent types).
> Speaking of polymorphism and HM: Scheme's numerical tower is sub-typed, but sub-typing is not only incompatible with HM, it also greatly increases the complexity of the type system and of type inference.
SML has `INTEGER` and `REAL` signatures covering all the primitive numeric types:

```sml
signature INTEGER
structure Int :> INTEGER
  where type int = int
structure FixedInt :> INTEGER    (* OPTIONAL *)
structure LargeInt :> INTEGER
structure Int<N> :> INTEGER      (* OPTIONAL *)
structure Position :> INTEGER

signature REAL
structure Real :> REAL
  where type real = real
structure LargeReal :> REAL
structure Real<N> :> REAL        (* OPTIONAL *)
```
https://smlfamily.github.io/Basis/integer.html
https://smlfamily.github.io/Basis/real.html
Here integers and reals are disjoint: an integer is not a real.
> Here integers and reals are disjoint: an integer is not a real.
As a mathematician this makes me gag, but I see the practical applications.
Why? We have the ring `Z` and the ring (field) `R`. A typical construction (say, in ZFC) will typically yield disjoint sets, but with a canonical injective homomorphism `i: Z -> R`. Only a posteriori do we stop distinguishing between the elements of `Z` and their images under `i`.
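A tiny sketch of the same idea in Haskell terms (an illustration, not anything proposed in this thread): `Integer` and `Double` are disjoint types, and the canonical map is an ordinary function (injective only up to floating-point precision, of course).

```haskell
-- The canonical injection i : Z -> R as an explicit function between two
-- disjoint types. Nothing identifies 2 :: Integer with 2.0 :: Double
-- until we apply i.
i :: Integer -> Double
i = fromIntegral

main :: IO ()
main = print (i 2 + 0.5)  -- 2 must be injected before it can meet 0.5
```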
In layman's terms, first we build the integers and the reals separately in parallel, and once we're done, we note that every integer we built is similar to one of the reals we built? Then we say that the integers are a subset of the reals, more because it's convenient to use than because it's the simplest explanation of integers and reals?
I only now realized that, according to the SML definitions above, the language doesn't really have a numeric tower in the sense of subtypes, even if we count subtyping in the module system. Int, FixedInt, LargeInt, and Int<N> are all distinct types matching the INTEGER signature; none of them is a subtype of another.
> In layman's terms, first we build the integers and the reals separately in parallel, and once we're done, we note that every integer we built is similar to one of the reals we built? Then we say that the integers are a subset of the reals, more because it's convenient to use than because it's the simplest explanation of integers and reals?
You can, of course, construct the reals `R` first and then declare `Z` as a certain subset, but this is usually not the way it is done, because constructing `Z` is much easier and is actually needed to construct `R` (which is the completion of the quotient field of `Z` at its archimedean place 😉).
Simpler to understand is probably the relation between a field, say `R`, and the ring of polynomials `R[X]` over it. However you construct or implement polynomials, the constant polynomials will probably have a different representation than the coefficients in `R`. But it is convenient to view every coefficient as a constant polynomial.
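A minimal sketch of this in Haskell, assuming polynomials are represented as coefficient lists (lowest degree first); `Poly`, `constant`, and `addPoly` are made-up names for illustration:

```haskell
-- A polynomial over r, as its list of coefficients, lowest degree first.
newtype Poly r = Poly [r] deriving Show

-- The canonical injection: a coefficient becomes a constant polynomial.
-- Its representation (a one-element list) differs from the bare coefficient.
constant :: r -> Poly r
constant c = Poly [c]

-- Coefficients and polynomials mix only through the injection,
-- e.g. addPoly (constant 2) (Poly [0, 1]) represents 2 + x.
addPoly :: Num r => Poly r -> Poly r -> Poly r
addPoly (Poly xs) (Poly ys) = Poly (go xs ys)
  where
    go (a:as) (b:bs) = (a + b) : go as bs
    go as     []     = as
    go []     bs     = bs
```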
Arg, this is too much fun and I already have too many interests :D Thanks for the excellent explanations.
Closing this, as the original question seems to have been resolved in favor of making `let` (or a roughly equivalent construct like `receive`) a core form side by side with `lambda`.
I think we should use `let` as the primitive (it's easy to add macros on top of it) with two forms of bindings: `(var value)` and `((var ...) value)`.
Contrary to the simply typed lambda calculus, type inference for the polymorphic lambda calculus (System F) is not decidable. HM type systems solve this problem by allowing only a restricted form of polymorphism, namely let polymorphism.
In Scheme, `let` is usually explained by `lambda`. What is a good way to formulate let polymorphism when all we have is just `lambda`?
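For concreteness, here is a minimal Haskell sketch (not Steme) of why the usual `let`-as-`lambda` explanation breaks down under HM:

```haskell
-- The let-bound id' generalizes to forall a. a -> a, so it can be used
-- at two different types in the body.
okay :: (Int, Bool)
okay = let id' = \x -> x in (id' 1, id' True)

-- Rewriting the let as an immediately applied lambda, the way Scheme's
-- let macro expands, is rejected by HM: a lambda-bound variable gets a
-- monomorphic type, so id' cannot be both Int -> Int and Bool -> Bool.
--
-- bad :: (Int, Bool)
-- bad = (\id' -> (id' 1, id' True)) (\x -> x)   -- does not type-check

main :: IO ()
main = print okay
```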