orcmid / miser

The Miser Project is a practical demonstration of computation-theoretical aspects of software
https://orcmid.github.io/miser/
Apache License 2.0

Minimum number of primitives? #21

Closed rnd0101 closed 3 years ago

rnd0101 commented 4 years ago

After a while, I revisited miser, trying to recall what I found so interesting about it. The main theme of my thought, however, was why oMiser defines so many rules. Can't the number somehow be made smaller?

And then I turned to the same question for Lisp, and found this summary: https://news.ycombinator.com/item?id=8714988

It is claimed there that Lisp has 10 primitives:

atom 
quote 
eq
car 
cdr 
cons 
cond 
lambda 
label 
apply

How many does oMiser have? I see nine individuals plus the lindies. I can even roughly imagine a correspondence like this:

atom -- ?
quote -- ob.e
eq -- inside ob.d?
car -- ob.a
cdr -- ob.b
cons -- ob.c
cond -- ob.d
lambda -- ?
label -- ?
apply -- ob.ap

Lisp has more primitives for definitions, so oMiser does not look verbose any more. The eq seems to be hidden inside ob.d. This comparison looks very surprising: I doubt my analysis is correct. I still feel like both are too verbose (after all, it is possible to use just one combinator to produce any program, so two primitives seem enough).

The question is: Is there some redundancy in the oMiser computational model? Can the number of primitives be smaller without sacrificing anything by way of syntax?

orcmid commented 4 years ago

The question is: Is there some redundancy in the oMiser computational model? Can the number of primitives be smaller without sacrificing anything by way of syntax?

Roman!! Wonderful to hear from you.

  1. The counterpart of atom is the individual. So the primitive obs and the lindies are all atoms.

  2. In LISP, quote is a syntactic marker, not a primitive operation. cond, lambda, and label are also of a syntactic nature. That is, they involve what were later called "special forms."

  3. ob.e is not quote. However, ob.e(x) has the effect of quoting x in the sense that obap.eval(ob.e(x)) = obap.ap(ob.e(x), y) = x.

  4. There are two levels to this. There are the primitive functions of the mathematical structure ‹ob› = 〈Ob,Of,Ot〉

This is expressed as an applied logical theory of FOL=, so "x = y" is the equality proposition, etc.

This foundation level comes with a number of constant individuals (including user-chosen ones), and the functions ob.a, ob.b, ob.c, ob.e, obap.ap, and obap.eval. "=" is part of the use of FOL=.

In obs used as scripts to obap.ap(p,x) and obap.eval(e) (and ev(p,x,e)), there are lindies and the special (primitive) constant obs ob.NIL, obap.A, obap.B, obap.C, obap.D, obap.E, obap.SELF, obap.ARG, and obap.EV.

  5. The computational model is represented by two additional functions, obap.ap(p, x) and obap.eval(e), where p, x, and e are obs and any computationally-determined results are obs. The two flavors are mutually dependent on supporting functions ev(p, x, e) and d(x, y), the latter returning an individual depending on whether x = y.

It is the obs that have significance in representation of scripts (the p and x) that are named as primitive in the file obaptheory.txt. These are more like syntactic primitives (appearing in scripts) rather than primitive/axiomatic functions as in the ‹ob› structure.

Examples

  Primitive p       obap.ap(p, x)

  ob.NIL            x
  obap.A            ob.a(x)
  obap.B            ob.b(x)
  obap.E            ob.e(x)
  a lindy           p :: x or p :: ob.e(x), depending on x

where application of a lindy leads to a kind of error result.

obap.C and obap.D are special forms (in rules for obap.eval and ev(p,x,e)). They give rise to special "curried" forms when applied singly:

  obap.C            obap.C :: ob.e(x) :: obap.ARG
  obap.D            obap.D :: ob.e(x) :: obap.ARG

obap.SELF and obap.ARG are obs that have special significance in ev(p,x,e).

ev(p, x, obap.SELF) = p
ev(p, x, obap.ARG) = x

They also yield a kind of error result when applied to an operand directly:

obap.ap(obap.SELF, x) = ob.e(obap.SELF) :: ob.e(x)
obap.ap(obap.ARG, x) = ob.e(obap.ARG) :: ob.e(x)

And so on for composed obs used for p, x, and e.
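
To make the tabulated cases concrete, here is a minimal Python sketch of these base cases, assuming a toy encoding: pair(a, b) builds an ob with fields .a and .b, ob_a/ob_b/ob_e stand for the primitive functions, is_lindy tests for lindies, and NIL, A, B, C, D, E, SELF, ARG name the constant obs. None of these helper names come from the actual Miser sources.

    def ap(p, x):
        # toy stand-in for obap.ap, base cases only
        if p is NIL:
            return x                        # ob.NIL acts as identity
        if p is A:
            return ob_a(x)                  # obap.A applies ob.a
        if p is B:
            return ob_b(x)                  # obap.B applies ob.b
        if p is E:
            return ob_e(x)                  # obap.E applies ob.e
        if p is C or p is D:
            # special forms curry to p :: ob.e(x) :: obap.ARG,
            # assuming :: groups to the right
            return pair(p, pair(ob_e(x), ARG))
        if p is SELF or p is ARG:
            # direct application gives the error-like ob.e(p) :: ob.e(x)
            return pair(ob_e(p), ob_e(x))
        if is_lindy(p):
            # p :: x or p :: ob.e(x) depending on x; the choice
            # condition is elided in this sketch
            return pair(p, ob_e(x))
        # composed obs follow the general ev(p, x, e) rules, elided here
        raise NotImplementedError("general case elided")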

orcmid commented 4 years ago

How many does oMiser have? I see nine individuals plus the lindies. I can even roughly imagine a correspondence like this: atom -- ? quote -- ob.e eq -- inside ob.d? car -- ob.a cdr -- ob.b cons -- ob.c cond -- ob.d lambda -- ? label -- ? apply -- ob.ap

There is no label required in the oMiser computational model because there are no variables in the sense that there are in LISP.

With regard to lambda, it happens that lambda abstraction is definable in oMiser.

It starts with representing combinators in oMiser. This is worked out in the file combinators.txt.

There are established schemes for transforming lambda expressions to combinator expressions, and the same idea works in the definition of a function, implemented by a script, such that lambda(v, e) produces the script with v (a lindy) abstracted: obap.ap(lambda(v, e), x) produces the same result that obap.eval(e) does with ob.e(x) substituted everywhere the abstracted v appears. In simple terms, v is replaced by obap.ARG in e, although there is more to it than that. You can see how this works in the hand-compiled combinators from their lambda definitions in combinators.txt.

There is also a definable function rec(f,e) that is like lambda and serves the purpose that label does in LISP, in that obap.ap(rec(f,e), x) will have lindy f abstracted into recursive references to the expression being applied to x. In effect, the occurrences of lindy f in e are replaced by obap.SELF, and that is relatively direct. Representation of the Y-combinator illustrates this case too.
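
To make the substitution idea concrete, here is a naive Python sketch over the same toy encoding as above. The real definitions behind combinators.txt are more careful (for instance, about enclosures, whose contents should be shielded from substitution), so treat this only as an illustration of the shape of lambda and rec.

    def abstract(target, replacement, e):
        # naively replace every occurrence of lindy `target` in script e
        if e == target:
            return replacement
        if is_pair(e):
            return pair(abstract(target, replacement, e.a),
                        abstract(target, replacement, e.b))
        return e

    def lam(v, e):
        return abstract(v, ARG, e)    # lambda(v, e): v becomes the operand

    def rec(f, e):
        return abstract(f, SELF, e)   # rec(f, e): f becomes self-reference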

In this respect, the LISP lambda and label are definable in the oMiser computational model.

orcmid commented 4 years ago

Concerning the number of primitives, I am a bit baffled by what are and are not considered primitives in LISP. I think they are being undercounted.

With regard to Symbolic Expressions (SEXPRs) in LISP versus (canonical) obs as expressions in oMiser, I note two differences:

  1. In LISP, the form (f x1 x2 ... xn) is a list structure that signifies a function (procedure) applied to the values of the listed argument expressions x1 x2 ... xn. The corresponding oMiser notation would be more like f[x1, x2, ..., xn], where the operand of f is a list.
  2. In oMiser, evaluation of the form (p) :: (x) is the application of the value of the procedure-script expression p to the value of the operand expression x. In oFrugal input, this simplifies to (p) x, but the semantics is the same.

oMiser is purely applicative in that applied procedures are supplied single operands. So the oFrugal expression f(x1, x2, ..., xn) is syntactic sugar for ( ... ((f x1) x2) ... xn).

Although lists are not the fundamental structure, the way they are for SEXPRs, they can be used, and a parameter list could be conveyed as an idiomatic list [x1, x2, ..., xn]. The correspondence with a LISP list structure is that [x1, x2, ..., xn] is sugar for (x1) :: (x2) :: ... :: (xn) :: ob.NIL
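
Both desugarings can be sketched over the same toy encoding, again with hypothetical helpers; pairing stands in for the application form in a script, per the default case of ev(p, x, e).

    def apply_n(f, *args):
        # f(x1, ..., xn) as nested single-operand applications:
        # ( ... ((f x1) x2) ... xn)
        e = f
        for x in args:
            e = pair(e, x)
        return e

    def listing(*xs):
        # [x1, ..., xn] as (x1) :: (x2) :: ... :: (xn) :: ob.NIL
        e = NIL
        for x in reversed(xs):
            e = pair(x, e)
        return e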

I think the basic predicates that distinguish atoms, and the relational operator EQ need to be treated as primitives in LISP.

QUOTE is also peculiar in LISP. (QUOTE e1 e2 ... en) makes the entire SEXPR a constant as far as evaluation is concerned. ob.e(x), or obap.eval(obap.E :: xe), takes the result of evaluating xe (to x) and then creates an enclosure with x as the content. In effect, the resulting value of an expression is quoted. It is easy to make these with constants, but it is not done via a special syntactic form.
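
The contrast can be sketched in the same toy terms (the helpers here are invented for the illustration): QUOTE freezes the written expression itself, while the obap.E script form evaluates its operand first and encloses the result.

    def lisp_quote(expr):
        # stands for QUOTE: the written SEXPR itself is the value
        return expr

    def miser_E(p, x, xe):
        # stands for eval of obap.E :: xe: evaluate xe, then enclose
        return ob_e(ev(p, x, xe))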

Sorry for the rambling. Your questions brought all of these considerations to mind.

rnd0101 commented 4 years ago

Nice to hear from you, Dennis! Thank you for the extended answer! And the answer above is exactly what I wanted. I think it's very important to contrast oMiser/oFrugal with LISP, because superficially they are similar.

I even think the above is of good enough quality to be formatted into a document. Unfortunately I am not enough of a LISP expert to go deeper into the topic, but the above provided some insights.

If you remember, my Eternal project requires a simple but easily "bootstrappable" programming language, so I am reconsidering my choices. The list is still the same: oMiser, LISP, Forth. However, I am not quite satisfied with any of them.

Let me check that I understood your answer right. What you are saying is that oMiser is really about combinators rather than "lists". Last year you added a lot on combinators, and they look so natural that I can't think of a better mainstream CS concept to associate oMiser with.

The reason I am not quite satisfied with the oMiser "engine" for Eternal is that my intuition says that while the foundations do work, a somewhat simpler engine could do the same in a more elegant way. There are too many rules. For example, the rules for ob.c are quite involved (this is from my oMiser shell implementation):

    def ev(p, x, e):
        # Evaluate script e against procedure p and operand x.
        if e.a == C and is_pair(e.b):
            # obap.C: construct the pair of the evaluated halves.
            return c(ev(p, x, e.b.a), ev(p, x, e.b.b))
        if e.a == D and is_pair(e.b):
            # obap.D: decide equality of the evaluated halves.
            if ev(p, x, e.b.a) == ev(p, x, e.b.b):
                return A
            return B
        if e.a == EV:
            # obap.EV: evaluate the result of evaluating e.b.
            return ev(p, x, ev(p, x, e.b))
        # Default: apply the evaluated a-part to the evaluated b-part.
        return ap(ev(p, x, e.a), ev(p, x, e.b))

A bit too much magic going on. If I have time, I will try to imagine how to make it simpler.

Also "bootstrappability" concerns me. For example, lambda calculus as implemented in functional languages goes amazingly fast from the mathematical construct to something one can program in the problem domain. (just compare with a Turing machine!). Of course, there were a lot of people refining the FP concepts and you have done it all so far mostly alone! Forth and LISP both do have good enough "bootstrappability". LISP even for concurrent use cases and I/O (see for example this article).

A philosophical sidenote. Last year I made a great leap in understanding programming languages by reading Robert Brandom's "Between Saying & Doing". One of the insights for me from there is the nature and relationships of "metalanguages" uncovered by "meaning-use analysis". (I believe you touch on that in the Miser Project with Bratman diagrams, and everywhere one language is described in another.) Brandom depicts a framework for contemplating that explicitly. I think this has direct and quite practical relevance to the "bootstrappability" of any language, as well as to getting at the "PV-necessary" (a term from Brandom) core.

My turn to apologize for ramblings...

orcmid commented 4 years ago

Wow! Great rambling, Roman!

I have been musing about the Lisp : oMiser relationship, thanks to your questions. I think I could write it out better now.

Concerning your Python implementation, I think part of the problem is the choice of constructors, and a little of it is about inheritance (for the different flavors of individual). The SML approach is better in this respect, I think. Let's keep that separate though.

There are some limitations of oMiser that will confound what you want to do, until some meaningful extension mechanism is available.

  1. Obs are immutable. There is no assignment, etc.
  2. The computational model is imperative and greedy. There is no direct evidence of laziness and no visible concurrency. Any acceleration in an implementation has to preserve these semantics.
  3. There are only obs and well-defined functions from obs to obs. (No input-output, no randomness, no side effects of any kind.)

One objective I have for this is to deal with types at some level, and then interactive behaviors. Those will be in extended systems, and until oMiser and oFrugal are complete (as a solid version 1.0) I am avoiding going too far.

The thing about this 1.0 oMiser/oFrugal level is that it is sufficient to demonstrate Church-Turing computational completeness, mainly by demonstrating simulations of the already-accepted CT-complete schemes: Lambda Calculus (via combinators), Universal Turing machines (via a scheme for defining arbitrary finite-state, single-tape simulations), and Recursive Function Theory (via achievement of arithmetic and function representations).

This is the tie-in with computation theory, with operational demonstrations.

It is not enough to establish a general-purpose nature though. One at least needs to be able to handle strings of characters as a data type, and arithmetic types are important. And input-output and interaction have to be dealt with at some point. A sub-project there has to do with type inference (related to combinators) and ways to interplay with different implementations of the same abstract entities. I think there has to be some aspect of recording intention of a representation. That's all still fuzzy. It is somewhat related to type systems and I am leaving that for further investigation.

Thank you for the reference to Robert Brandom. I am bemused that some philosophers of my acquaintance (but neither Putnam nor Quine) have full beards. I have obtained "Between Saying and Doing" on your recommendation. The inferential view might be clearer in earlier works, but they are all too expensive to consider at the moment.

I have some sources on different angles, but I suspect in terms of computation, there is reasonable harmony.

rnd0101 commented 4 years ago

Quick comment on proving oMiser Turing-completeness. Do you really need to do it in three different ways? One approach could be, as demonstrated here for another purpose, to rigorously embed the SK combinator calculus. Then the other results will follow. (I actually thought you had already done something like that.)

As for the oMiser shortcomings you listed, my angle is practical: an innovation, to be successful, needs to be understood and to have certain ergonomic properties, unless backed by a big corporation. Today's computing frameworks demand concurrency and interaction, not just the usual properties taken for granted, like computational (expressive) power. Thus, to have people try it hands-on, I imagine oFrugal/oMiser in a Jupyter notebook, where people can easily try it out. Then some theoretical CS major may get excited and take the torch forward. Easier said than done.

Same for establishing certain properties: Coq / Agda / metamath / ... automatically verifiable proofs become the way, in addition to demonstrations. (I've tried to add axioms in Coq, but I am not expert enough to turn them into proofs.)

Thanks for the feedback on the Python shell! Please also note it has primitive "solve" functionality to find (in a brute-force way) algorithms for specific transformations. Actually, I think an FP demonstration, while easier to achieve, may give fewer insights than trying to do it in a system with less "affinity". I do not remember the details, but someone once explained why a certain object-oriented software X was written not in C++ but in C: a "host" system of the same paradigm may harm the development of the target system.

One idea I have regarding interactive / concurrent oMiser is to extend lindies' behavior. It just came to my mind: lindies (or lindy-like entities) may be the gate to "a universe" with alternative computational behavior. That is, they may bring a concurrency mechanism (e.g., be promises), communication channels, a kind of "oracle tapes", etc.: in other words, provide everything needed for more real-world software frameworks. Then it will be possible to apply some concurrency regime on top (actor model? CSP? you name it). In short: obs need a legitimate way to refer to and communicate with external entities. Immutability is not a big problem: functional programming has learned to live with it already.

As for laziness, my wild guess is that there may be some suitable combinator for co-recursion (maybe something good here?), as we need to deal with infinite data structures easily. Maybe the notion of continuations can somehow be introduced to oMiser?

I am less sure about introducing types, as I bet your understanding of types is more like that of SML (at least, why settle for something inferior?), but I think an "extended lindy" backed by another set of oMiser native code may help.

rnd0101 commented 4 years ago

Actually, another feature of Lisp shared by oMiser is homoiconicity. I believe this is what brings most of the power, and it may explain some of the superficial similarities.

orcmid commented 4 years ago

Actually, another feature of Lisp shared by oMiser is homoiconicity. I believe this is what brings most of the power, and it may explain some of the superficial similarities.

Wonderful!! I had to look that up. https://en.wikipedia.org/wiki/Homoiconicity

Yes, the chosen oMiser computational model shares that property with LISP. It is essentially the stored-program principle and what some have termed "unstratified control." I agree that this is where the power is, even restricted to obs as the rudimentary data structure.

One can simulate an elementary "pure" LISP in oMiser (with chosen individuals corresponding to LISP atoms), making the obvious connection of ob.a, ob.b, and ob.c with CAR, CDR, and CONS. ob.e gets used internally in the simulation but is not exposed. It has all the problems that LISP has. LISP also has arithmetic, but one can translate numerals to enclosures.
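
A sketch of that obvious connection, in the same toy Python encoding as earlier (the helper names are hypothetical, and chosen individuals play the LISP atoms):

    def cons(x, y):
        return pair(x, y)            # CONS via ob.c
    def car(s):
        return s.a                   # CAR via ob.a
    def cdr(s):
        return s.b                   # CDR via ob.b
    def atom(s):
        return is_individual(s)      # ATOM: individuals as the atoms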

orcmid commented 4 years ago

Quick comment on proving oMiser Turing-completeness. Do you really need to do it in three different ways? One approach could be, as demonstrated here for another purpose, to rigorously embed the SK combinator calculus. Then the other results will follow. (I actually thought you had already done something like that.)

There are some theoretical problems with embedding SK (or SKI) combinator calculus. I know that has been done in some systems, and I did not want to take that step. The problem is that applying combinators is supposed to produce combinators. One wants a quasi-combinatorial scheme that admits other kinds of data. Also, one has to deal with equality not being available for combinators. (I am baffled that people don't appreciate that the Church-Rosser theorem is a negative result.)
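
As a side illustration of the closure point, here is a tiny normal-order SK reducer in Python; applying combinators only ever yields terms built from S, K, and application, which is exactly what makes admitting other kinds of data awkward. This sketch is about the calculus itself, not the oMiser representation.

    # Terms are "S", "K", or application pairs (f, x).

    def step(t):
        """Do one leftmost reduction step; return (term, changed)."""
        if isinstance(t, tuple):
            f, x = t
            if isinstance(f, tuple) and f[0] == "K":
                return f[1], True                  # K a b -> a
            if (isinstance(f, tuple) and isinstance(f[0], tuple)
                    and f[0][0] == "S"):
                a, b = f[0][1], f[1]
                return ((a, x), (b, x)), True      # S a b c -> a c (b c)
            nf, changed = step(f)
            if changed:
                return (nf, x), True
            nx, changed = step(x)
            return (f, nx), changed
        return t, False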

Dana Scott managed such a scheme and it is behind the simple type system of SML. I was led to the relevant papers only in the last couple of years as I resumed the Miser Project on GitHub.

My construction, exhibited in the file combinators.txt, is to demonstrate oMiser scripts that represent S, K, and I, and from there the other common combinators, including Y. In this construction, type preservation is identified as an emergent property rather than a pre-condition. Also, the Scott introduction of a Boolean type is treated properly for oMiser, I believe.

An interesting problem, also relevant to the introduction of lambda(x) and rec(f) as definable, is the ability to reverse-compile back to combinatorial forms of that nature. Although every ob, p, has an interpretation as a script in both obap.ap(p,x) and obap.eval(p), there might have to be restrictions on those p for which lambda(x) p and rec(f) p satisfy the formal requirements on (lambda(x) p) y and (rec(f) p) x. I suspect the condition will be related to pragmatic conditions for successful decompilation. Such reverse-engineering is an important matter when schemes for program transformation are addressed.

As you suggest, demonstration of combinator representations, and having scripts that accomplish lambda and rec abstraction is the most useful case, and it might be sufficient.

I think the demonstration of Turing computability is important because it is the foundation of complexity theory and sort of completes the Church-Turing harmony. Recursion theory might be unnecessary to address. I don't want to rule it out at this point. A big issue has to do with number theory and demonstrating that oMiser is good enough for that, at least theoretically. How to make that the case as a practical matter then becomes a useful question too.

PS: At the time of the first-ever AAAI conference at Stanford, there was also a first-ever LISP Conference (to become the ICFP conference as functional programming took on a life of its own). I think I met David Turner at that time, and I did not appreciate the idea of adding primitive combinators to hardware. I am still not thrilled about it. I saw it as a kind of question-begging and also a misunderstanding of theoretical difficulties. I might revise that opinion, but not yet.

orcmid commented 4 years ago

PPS: There are three other matters that might be worthy of exploration at various points.

  1. Gregory Chaitin's Algorithmic Information Theory offers an insight into the limits of Formal Reasoning using a restricted form of LISP for a demonstration. I would expect the demonstration can be represented in oFrugal and the claim scrutinized in that respect.

  2. A Computational Logic (now implemented in the ACL2 dialect of LISP) also demonstrates ways of reasoning and deducing things about LISP programs themselves. Similar to the AIT exercise, it would be interesting to see how there might be a Miser Computational Logic. I am keen on determining what has to be fixed (made rigid/reserved) in Miser scripts for that to be available.

  3. There are proposals and treatments of higher algorithmic systems. My attention has been drawn to the work by Yuri Gurevich on computation at arbitrary levels of abstraction. I am unclear whether that is relevant to Miser and prospective extensions. There is much material though, and it would be interesting to determine whether there is a way to lift Miser into higher-levels of abstraction in a disciplined manner.

PPPS: I am very grateful for your return to discussions about the Miser Project. I respect that this might not serve your Eternal project, especially because I am intending to come around to extensions with components from other sources in a pretty roundabout and (severely) restricted way. It is too soon for me to be any clearer than that and I may be completely wrong-headed about it. It may be a while before the various considerations can be demonstrated.

rnd0101 commented 4 years ago

While I need to refresh my memory on some concepts you mentioned above, your comment on hardware combinators made me think "why?". So I googled up a book, "An Architecture for Combinator Graph Reduction" by Philip John Koopman, which (if I understood the summary correctly; I do not have a copy) touches on the topic of special-purpose hardware for functional languages. I am not yet sure whether it is relevant for your project or mine. Is it a coincidence? While writing the comment I was thinking about that possibility, that is, whether there is a better architecture for functional programming, and if so, whether I can make use of some very simple underlying machine for Eternal (if you remember, my concern there was a super-compact, conceptually simple engine a child can code into a machine, which can be used to achieve richer and richer computation environments by just code-as-data). And my bet so far is on a functional programming technique which possesses homoiconicity and is the absolute opposite of a Turing tar-pit (well, I am OK with the very core being such, but then the next layers of the "onion" can bring comfortable programming at higher and higher levels). Thus I am contemplating how to make some proof-of-concept with oMiser in that respect.

Ironically, I absent-mindedly googled for "ideal programming language" and the first hit was some kind of "tree processing programming language" description. That is it: Lisp is about lists as the base concept (similar to APL being about arrays), and I see a tree-processing core in oMiser. Trees resonate well with compilers (grammars and ASTs), with many real-world programming tasks (all that JSON, graphs, even HTML, etc.), and with many CS and type-theoretical constructs (simpler concepts just do not cut it without superstructures). This is all on the intuition level. Of course, computation can be defined using any construct and transformed to another, but a properly chosen core can give a noticeable head start. Unfortunately, my interest in these side projects is not constant. Sometimes I am in total despair over implementability. This surge in interest was caused by my adjusting my "applied" philosophy on the inferentialism-denotationalism scale, which makes me rethink the meaning of the whole Eternal endeavor on one side, but then gives new hope, because I was at a dead end with purely "denotational" "reasoning", which seemed to me a good approximation at first...

Hopefully, I can help oMiser / oFrugal in the same process. It's totally understandable that Miser has its roadmap (sorry if my comments sound like complaints; I am just trying to be open with my evaluation from very different angles). And research would not be research if the researcher were always heading in the right direction.

orcmid commented 4 years ago

That's a very nice account. Thank you for the analysis.

Eternal (if you remember, my concern there was a super-compact, conceptually simple engine a child can code into a machine, which can be used to achieve richer and richer computation environments by just code-as-data). And my bet so far is on a functional programming technique which possesses homoiconicity and is the absolute opposite of a Turing tar-pit (well, I am OK with the very core being such, but then the next layers of the "onion" can bring comfortable programming at higher and higher levels). Thus I am contemplating how to make some proof-of-concept with oMiser in that respect.

I am very sympathetic to that view. oMiser is down at the low level, of course, and it is not clear at this point how one elevates language and also deals with the rather poor performance one can expect if everything comes down to the ob level and its applicative approach.

Ironically, I absent-mindedly googled for "ideal programming language" and the first hit was some kind of "tree processing programming language" description. That is it: Lisp is about lists as the base concept (similar to APL being about arrays), and I see a tree-processing core in oMiser. Trees resonate well with compilers (grammars and ASTs), with many real-world programming tasks (all that JSON, graphs, even HTML, etc.), and with many CS and type-theoretical constructs (simpler concepts just do not cut it without superstructures).

That trees (with even LISP's data structure actually being tree-based) are appealing seems to rest on two important features. First, the arrangements are amenable to mathematical treatment, at least with immutability. Secondly, there is no visible storage management to be concerned with. Garbage collection of some form is called for, but it is in the mechanism and not something the user of oFrugal, say, need be too concerned with. Tail recursion is a more important economization technique :).

Arrays have random access, and that matters in many situations. In oMiser, as in LISP, dictionaries are rather automatically appealing. The appeal of arrays, besides the random access, has to do with small updating and being able to do it in place. There are purely functional approaches that come close to that performance. This is probably the most worrisome thing to get right in raising the level of abstraction. I don't know if something APL-ish is viable. Backus FP was rather array-oriented, and that is worthy of examination, although I am not delighted with FP, even though oFrugal has a certain resemblance concerning a system layer above the functional one.

Hopefully, I can help oMiser / oFrugal in the same process. It's totally understandable that Miser has its roadmap (sorry if my comments sound like complaints; I am just trying to be open with my evaluation from very different angles). And research would not be research if the researcher were always heading in the right direction.

Agreed. I trust these conversations will be valuable to both of us, and others, as we move ahead in areas of shared interest.

orcmid commented 4 years ago

Arrays have random access, and that matters in many situations. In oMiser, as in LISP, dictionaries are rather automatically appealing. The appeal of arrays, besides the random access, has to do with small updating and being able to do it in place. There are purely functional approaches that come close to that performance. This is probably the most worrisome thing to get right in raising the level of abstraction. I don't know if something APL-ish is viable. Backus FP was rather array-oriented, and that is worthy of examination, although I am not delighted with FP, even though oFrugal has a certain resemblance concerning a system layer above the functional one.

Two things.

  1. I find the Chris Okasaki book, "Purely Functional Data Structures", to be very useful with regard to techniques for improving performance with certain abstractions. The original thesis is available online, but the book is a more refined version. Most of these can be implemented in oMiser, and it would be even better if they were implemented "behind the scenes" in a more accelerated form.

  2. The idea of behind-the-scenes acceleration raises a particular problem, and that has to do with the comparability of obs. Until it is somehow no longer possible or feasible, I am pursuing the use of extensional equality. This means that if there is some way of introducing an accelerated procedure or data structure as a transformation of an ob, say to make an individual of some type, that individual remains comparable with obs of the same type based on comparison of the un-transformed obs (see the sketch below).

There might be other ways to consider cases where the individuals are different, extensionally, but equivalent in some other (intended) sense. That requires some relationship other than ob equality. This is not a new situation. I propose to address it systematically as well as I can.
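
One way to picture the comparability requirement is a wrapper that carries both the accelerated representation and the ob it was transformed from, with equality always referred back to the un-transformed ob. This is only a sketch of the intent, with hypothetical names.

    class Accelerated:
        # an individual introduced by transforming an ob, e.g. a numeral
        # given a native int representation behind the scenes
        def __init__(self, source_ob, fast_repr):
            self.source_ob = source_ob    # the un-transformed ob
            self.fast_repr = fast_repr    # the accelerated form
        def __eq__(self, other):
            # extensional equality: compare the underlying obs
            return self.source_ob == getattr(other, "source_ob", other)
        def __hash__(self):
            return hash(self.source_ob)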

Onward ...

rnd0101 commented 4 years ago

Yes, certainly LISP is OK with trees. And what a coincidence: I took Okasaki from the shelf yesterday to consult on what can be done about immutability.

My plan is to explore an "imperative" implementation of the oMiser low level (I want to learn Rust at the same time), because an FP implementation is "easy" but, in my opinion, does not reveal the hard parts and does not add new intuitions. I am not yet sure whether to implement the syntax - will see in the process - but I want to try the core and maybe some alternative superstructures. If motivation is strong enough at that point, I really want to see how much I can accelerate the "solve" functionality (for equations). That is the "saying" part; I will keep my "doing" part on GitHub.

orcmid commented 4 years ago

Yes, certainly LISP is OK with trees. And what a coincidence: I took Okasaki from the shelf yesterday to consult on what can be done about immutability.

;)

My plan is to explore an "imperative" implementation of the oMiser low level (I want to learn Rust at the same time), because an FP implementation is "easy" but, in my opinion, does not reveal the hard parts and does not add new intuitions. I am not yet sure whether to implement the syntax

Yes, it is the case that implementation in SML is very easy, and rather direct from the mathematical expression. I am inclined to make a C/C++ COM-interfaced library for use with different languages/platforms and for a low-level proof of concept. I was also thinking about Rust, although I am wary because Rust is not very stable at the moment: the approach to specification (or lack thereof) is concerning.

With regard to the low level, one way to avoid garbage collection is to use reference counting (which comes with COM anyhow). The immutability and lack of cycles make that easy. I am thinking of that as an easier way to get a lower-level proof of concept. Also, one can do accelerators and some garbage collection behind the COM interfaces as a form of performance improvement later.

The oMiser library will not have a shell. It is all runtime engine.

The shell is oFrugal's job. The syntax that I specify is essentially for an ob-calculator expression language. It basically takes an expression and evaluates it to a canonical ob that is then presented. That's a basic REPL demonstration. It also provides for storage of results and their use in subsequent calculations, which is how we can reuse source code and also get around the absence of assignment. It turns out that linear assignment/shadowing is easy to model, and the shell language remains context-free, with a proviso about linear referential integrity.
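
A toy read-eval-print loop conveys the shape of that calculator behavior. The `let` form, parse_ob, and eval_ob here are invented for the sketch; they are not oFrugal syntax or API.

    bindings = {}    # linear store of named results for reuse

    def repl():
        while True:
            line = input("> ")
            if line.startswith("let "):
                # bind (or shadow) a name to an evaluated canonical ob
                name, _, expr = line[4:].partition("=")
                bindings[name.strip()] = eval_ob(parse_ob(expr, bindings))
            else:
                # evaluate to a canonical ob and present it
                print(eval_ob(parse_ob(line, bindings)))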

Apart from an SML version, I think one way to do the shell is as a Microsoft Terminal application. That's a little beyond using a simple console application that ports to Windows, Mac, and Linux. (I expect that Microsoft Terminal will be ported, just as Visual Studio Code is now.)

There is a diagram of the oMiser/oFrugal separation at https://www.facebook.com/MiserProject/photos/a.1938469166367657/1978237959057444/?type=3&theater. I am going to add that to other documentation and explanations.

There is also a series of blog posts that narrate the developments up to, but not including, the computational model. They will continue to be developed at https://orcmid.blogspot.com/search/label/Miser%20Project. I have been stalling while I ponder how to move into the description of the part that is perhaps more sensible to computer programmers.

Finally, I have also established https://orcmid.github.io/miser/, although I am still fussing with formatting and templates before I start putting documentation there.

After some other projects are handled (this being tax time in the US), I will roll up my sleeves and get back into all of this.

rnd0101 commented 4 years ago

Thanks for the https://orcmid.blogspot.com/search/label/Miser%20Project link! Looking forward to knowing whether you will like Brandom's book: the style of your Miser Project blog actually brings very strong associations with meaning-use analysis. Or maybe I have just started to see it in texts which deal with languages and metalanguages and are plugged into the philosophical realm. Brandom deals with modal logic mostly as an example, but I wonder if similar reasoning can be applied to Miser's concepts and logic. From the article I guess you already have that meaning-use intuition (even though you do not state it or "label" the relations), so the most intriguing thing for me is whether you find anything new in "Between Saying & Doing"...

orcmid commented 4 years ago

I haven't started Brandom's book yet. I will not ignore it.

STOP PRESS. I found something that may deal well with the addition of things like assignment, arrays, and other imperative cases as a form of "computation expression." The term is used in F#, which is an outgrowth of OCaml, which is based on [S]ML.

The syntax is making my eyes cross, and I need to back out and get a better understanding of some of the other primitives of F#/OCaml/SML, to understand better what the actual operations are versus how they are sugared to look like imperative stuff.

I believe part of it is having a kind of statement sequence, {s1; s2; ...; sn}, something LISP did also, but here it is reconciled in a purely functional context. What happens is that a builder is applied to the code of the sequence in such a way that state-/environment-passing does not have to be handled directly: the statement sequence is actually sugar for a form where the sequence is evaluated internal to the builder, in a manner that permits a kind of stateful continuation activity.

Don Syme talks about how such builder sequence-of-operations expressions are desugared to actually be carried out in a functional manner. I am not immersed in F#, so I don't follow this very well. But I think this can be re-expressed in terms of lambda expressions and the application of certain builder functions that cause evaluation to involve state while still having purely functional expression pieces of the "computation expression."
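
My rough reading of the idea, rendered in Python rather than F# (the ListBuilder here is an invented stand-in, not F#'s actual API): the sugared sequence desugars into nested calls on a builder, so the "statements" are really lambdas that the builder chains together, purely functionally.

    class ListBuilder:
        # a builder whose bind/return give list-comprehension behavior
        def bind(self, m, f):
            return [y for x in m for y in f(x)]
        def ret(self, x):
            return [x]

    b = ListBuilder()

    # Sugared (hypothetical): { let x <- [1, 2]; let y <- [10, 20];
    #                           return x + y }
    # Desugared into purely functional pieces:
    result = b.bind([1, 2], lambda x:
             b.bind([10, 20], lambda y:
             b.ret(x + y)))
    # result == [11, 21, 12, 22]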

I might not be making any sense here. It will take more work. There also needs to be a better term than "computation expression."

I just wanted to put a flag on the play :).

rnd0101 commented 4 years ago

OCaml will have something like that soon: https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-syntax.html

Quote: "Compared to similar extensions for languages like Haskell, F# and Scala, it’s interesting to note that the OCaml version is not only targeting monads but also supports a version for applicative functors."