pre-srfi / static-scheme

Mutability #4

Closed by aijony 3 years ago

aijony commented 3 years ago

If I am writing typed Scheme code:

(define t : Int 1)

can I then do (set! t 2)? The Haskeller says "no"; the MLer says "yes".

Regardless, even in untyped Scheme I should not be able to do (set! t 2.5), since that would break our type soundness.

Personally I'm a Haskell person, but I think we should lean towards the ML answer for Scheme code. We could still offer an immutable type: given (define h : (Pure Int) 1), the assignment (set! h 2) would be illegal in both typed and untyped Scheme.
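
A rough sketch of the two behaviors, modeled in today's R7RS: SRFI 111 boxes stand in for ML-style mutable typed bindings, and the Pure binding is an ordinary define (the Pure form itself is hypothetical syntax, not shown).

(import (scheme base) (srfi 111))

;; ML-style: t is conceptually an Int ref; mutation is allowed
;; as long as it preserves the declared type.
(define t (box 1))
(set-box! t 2)       ; fine: 2 is an Int
;; (set-box! t 2.5)  ; the type checker would reject this

;; Haskell-style / (Pure Int): an ordinary immutable binding.
(define h 1)
;; (set! h 2)        ; illegal under the proposal, typed or untyped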

We could even take the same approach with laziness:

(detype f (-> (Lazy Int) Int))
(define (f a) a) 
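
For comparison, R7RS-small already has promises, so (Lazy Int) could plausibly map onto delay/force. A minimal sketch under that assumption:

(import (scheme base) (scheme lazy))

;; f takes a promise of an Int and forces it on use.
(define (f a) (force a))

(f (delay (+ 1 1))) ; => 2, the delayed body runs only when forced
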
johnwcowan commented 3 years ago

See #5 for "no type annotations".

Steme has no mutable variables or mutable data structures from its own perspective. It can process a Scheme mutable vector, but cannot change it, and if Scheme changes it, that's an error (in the usual Scheme sense, what C calls "undefined behavior"). It can also make use of any of several kinds of persistent vectors (trees, fectors, and so on), and likewise throughout the other datatypes.
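
To illustrate the persistent-update style this describes, here is a deliberately naive O(n) copy-on-write sketch in plain R7RS; fectors and tree-based persistent vectors provide the same interface in O(log n):

(import (scheme base))

;; Functional update: returns a new vector, leaves the old one intact.
(define (vector-set v i x)
  (let ((w (vector-copy v)))
    (vector-set! w i x)
    w))

(define v0 (vector 1 2 3))
(define v1 (vector-set v0 1 99))
v0 ; => #(1 2 3), unchanged
v1 ; => #(1 99 3)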

It's possible that we may abandon purity, but it wouldn't be with Scheme mutable variables: it would be for I/O. Of course, we could add mutable boxes (ML refs), but once you have boxes you have mutable everything, because a mutable vector is an immutable vector of mutable boxes. It spreads until the entire system is mutable.
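
The box argument in a few lines of R7RS plus SRFI 111 (a sketch, not Steme syntax):

(import (scheme base) (srfi 111))

;; An immutable vector whose slots are mutable boxes behaves
;; exactly like a mutable vector: the spine never changes,
;; but every element can.
(define mv (vector (box 1) (box 2) (box 3)))

(define (mv-ref v i)    (unbox (vector-ref v i)))
(define (mv-set! v i x) (set-box! (vector-ref v i) x))

(mv-set! mv 0 99)
(mv-ref mv 0) ; => 99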

mnieper commented 3 years ago

If we abandoned purity for I/O, we could then also abandon it for everything else because we would lose the strong guarantees (for correctness proofs, compiler optimizations, automatic parallelization, etc.) anyway.

johnwcowan commented 3 years ago

Yes. Abandoning purity is not an option as far as Steme is concerned. If Steme is using a mutable data structure that it shares with Scheme and Scheme mutates it, it is an (undetectable) error. Closing this.

lassik commented 3 years ago

> If we abandoned purity for I/O, we could then also abandon it for everything else because we would lose the strong guarantees (for correctness proofs, compiler optimizations, automatic parallelization, etc.) anyway.

This seems like a good rationale to me as well.