keean / zenscript

A trait based language that compiles to JavaScript
MIT License

Chemlambda, graph theory etc #48

Open NodixBlockchain opened 3 years ago

NodixBlockchain commented 3 years ago

Hello guys, I've been doing a lot of experimentation lately on parsers, grammars and graph theory, and I started to dig into the relationship between graph theory and computation theory.

Deep down, all those algorithms (parser combinators, LL/LR parsers, etc.) always seem to come down to a tree representation, and that got me digging into the relationship between graph theory and computation models.

Just yesterday I found this article:

 https://blog.reverberate.org/2013/07/ll-and-lr-parsing-demystified.html

There are three ways of traversing a binary tree, as explained on Wikipedia: in-order, pre-order, and post-order. They differ in whether you visit the parent node before (pre-order), after (post-order), or in between (in-order) the children of that parent. These three correspond exactly to infix, Polish, and Reverse Polish notation:

1 + 2 * 3 // Infix expression; in-order traversal.

+ 1 * 2 3 // Polish (prefix) expression; pre-order traversal.

1 2 3 * + // Reverse Polish (postfix) expression; post-order traversal.

So Polish and Reverse Polish notation fully encode a tree structure and the steps you would take to traverse it. The main difference between these encodings and an actual parse tree is that Polish and Reverse Polish encodings are not random access. With a real tree you can choose to follow an interior node to its right subtree, its left subtree, or even (for many trees) its parent. With these linear encodings there is no such flexibility: you have to follow the traversal as it is already encoded
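To make that concrete, here is a tiny sketch of the three traversals over the parse tree of 1 + 2 * 3 (the Expr type and the function names are just made up for this example):

-- A tiny expression tree: leaves are numbers, inner nodes are operators.
data Expr = Leaf Int | Node String Expr Expr

-- Parse tree of 1 + 2 * 3 ('*' binds tighter than '+').
example :: Expr
example = Node "+" (Leaf 1) (Node "*" (Leaf 2) (Leaf 3))

inorder, preorder, postorder :: Expr -> [String]
inorder   (Leaf n)     = [show n]
inorder   (Node o l r) = inorder l ++ [o] ++ inorder r      -- infix:  1 + 2 * 3
preorder  (Leaf n)     = [show n]
preorder  (Node o l r) = [o] ++ preorder l ++ preorder r    -- Polish: + 1 * 2 3
postorder (Leaf n)     = [show n]
postorder (Node o l r) = postorder l ++ postorder r ++ [o]  -- RPN:    1 2 3 * +

main :: IO ()
main = mapM_ (putStrLn . unwords) [inorder example, preorder example, postorder example]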

That's an example of how tree theory, and the way a tree is processed, also has implications for the computation model.

While digging into this, I found this thing called chemlambda, which seems to be a model of computation based on graph theory, with graph rewrite operations that modify the graph, which allows a formal way of expressing self-modifying code.

Apparently chemlambda is inspired by this model:

 http://tuvalu.santafe.edu/~walter/AlChemy/Statement/organization.html

I find it very inspiring, and I found it has implications in both complexity theory and distributed systems.

It's extremely interesting to me, especially this part:

A central theme of this project is our emphasis on the action of "agents" as consisting in a mapping from "agents" to "agents". The foundational distinction between this project and others sharing similar emphasis is that none of the others have the feature of being interpretable as mathematical structures. To bootstrap a theory of biological organization, requires a formalization of chemistry that is useful for biology. Ultimately this means defining "interaction" and "structure of an entity" mutually. In our case "structure of an entity" will specifically come to mean a formula, expression, or term within a syntactical system resulting from a theory about the possible interactions characterizing that class of entities. To be illuminating, a linkage between structure and interaction must connect to a mathematical theory.

Stable relational structures are what conventional dynamical systems take as a given. If we aim at a description of their genesis, we must extend our description of the world to include a sound and sufficiently general dynamics of relationships capable of generating stable relational structures as stationary outcomes. What enables any such dynamics from within, we submit, is at the very minimum a notion of entities that lead this subtle double-life: entities that simultaneously act as relationships between entities (by being interpretable as functions on them).

This removes the barrier between "code" and "data", so that you can have functions that take a function as input and output another function, which allows the kind of algebra needed for self-organising structures. It seems to be a solution for parallelism, decentralised distributed systems, and self-modifying code, which looks like a holy grail to me :)

I started to experiment with this, exploring the relationship between graph theory and computation models, but I can't seem to find a lot of information on it. It seems to be taken as a given that a parser can output a parse tree, and that those parse trees can be turned into a Turing-complete system, but I can't really find a deep formal theory on this.

I have a hunch that, for example, sticking to what can be represented as a DAG without backward references prevents loops and infinite recursion.

As I use a blockchain system to store the tree, with nodes referenced by their hash, it naturally prevents cycles. I just added some instructions to have an equivalent of map working on lists, and recursive operations on data trees, like computing skeletal animation based on quaternions, and it seems to work well.
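To illustrate the hash-referencing point, here is a minimal sketch (not my actual blockchain code; the Node type, the toy hashOf function and the store helper are invented for the example). A node can only reference children that are already stored, and its own hash depends on theirs, so edges always point "backwards" and no cycle can ever be formed:

import qualified Data.Map as Map

type Hash = Int

-- A node holds a label and references its children *by hash only*.
data Node = Node { label :: String, children :: [Hash] } deriving Show

-- Toy stand-in for a cryptographic hash: any deterministic function of the content.
hashOf :: Node -> Hash
hashOf (Node l cs) = foldl (\h c -> h * 31 + c) (sum (map fromEnum l)) cs

-- Storing a node requires all of its children to exist already,
-- so the store is a DAG by construction.
store :: Node -> Map.Map Hash Node -> Maybe (Hash, Map.Map Hash Node)
store n m
  | all (`Map.member` m) (children n) = let h = hashOf n in Just (h, Map.insert h n m)
  | otherwise                         = Nothing  -- refuses dangling (hence cyclic) references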

With an LR-parsing style it even seems to allow very clean handling of asynchronous functions, as all the computation that has to be done on the result is on the stack when the asynchronous function is called, so it can easily be added to the handler code and executed asynchronously when the function completes, and the whole dependency graph is explicit.

In theory, using this sort of algebra can also allow runtime polymorphism: if you can have graph operations working on the computation graph, they can generate monomorphised functions from the polymorphic version, while still staying in the realm of a formal system.

In theory this can be done at compile time, load time or runtime, and I think it can even elucidate the interface problem, as functions can be checked for their type with graph operations. Graph rewrite operations like those in chemlambda seem to be a good way to represent polymorphism.

And in theory it can accommodate self-modifying code and all kinds of dynamic system modeling, even if that seems to escape common computation rules.

But I can't seem to find much formal definition of how tree properties translate to a computation model, even though it seems to me a computation model can always be mapped to a tree, and it's very easy to have a system of formal operations working on graphs, which I think can be a way to have runtime polymorphism.

It's a paradigm that got stuck in my mind, and it seems I can't really nail down this sort of relationship or find formal systems describing it.

shelby3 commented 3 years ago

https://www.discovermagazine.com/the-sciences/new-theory-of-everything-unites-quantum-mechanics-with-relativity-and-much

https://www.wired.com/story/stephen-wolfram-invites-you-to-solve-physics/

https://www.scientificamerican.com/article/physicists-criticize-stephen-wolframs-theory-of-everything/

https://gizmodo.com/the-trouble-with-stephen-wolfram-s-new-fundamental-theo-1842985419

https://www.forbes.com/sites/startswithabang/2020/05/13/3-simple-reasons-why-wolframs-new-fundamental-theory-is-not-yet-science/

https://interestingengineering.com/the-theory-of-everything-by-wolfram-gets-criticized-by-physicists

NodixBlockchain commented 3 years ago

hello :)

I found out as well that it has a connection with physics, even if that was not what I was looking for in the first place.

I was able to talk to a good physicist who told me it looked close to Prigogine's thermodynamics https://en.wikipedia.org/wiki/Ilya_Prigogine

As I'm also quite into Edelman and Whitehead, it would not surprise me that there are connections with physics. I would just answer with this: "How is Natural Science Possible?: Whitehead's 1st Lecture at Harvard (1924)" :)

I couldn't find a direct connection between Wolfram's model and this one, but it's close in principle; it's likely one inspired the other, or they have the same implications for physics.

https://arxiv.org/pdf/1811.04960.pdf

The references and links in the text are left as they were and much improvement could be made concerning references pointing to Interaction Nets [2], for example; Berry and Boudol CHAM [1] is not explicitely mentioned here, but the initial version of this formalism, i.e. ”chemical concrete machine”[13], uses it. The Algorithmic Chemistry of Fontana and Buss [3] [4] appears here via a link to their research, perhaps this link is better. The same happens with the work [5] by Christoph Flamm, for which, in the body of the article, a link to his papers page is provided. Part of this was due to the ignorance of the author at the moment (a mathematician). A later hope was to keep the article as is and to link it with critical public reviews, as a weaker form of validation than, for example, an alternative Haskell implementation of chemlambda [5]. However, the main idea of the article is original, as are the self-imposed constraints, namely to restrict to only local rewrites on graphs, regardless their interpretation as lambda terms (or lack of such interpretation), restrict to only two algorithms of reduction (deterministic or random), the most simple possible.

This sort of model is more relevant for dealing with living organisms than with inanimate objects like usual physics does. There are already some hints of this in the way neural networks work.

But it was not my original intent to get into physics.

I found this article that gives a short summary of the different ways to achieve polymorphism:

https://dev.to/jmcclell/inheritance-vs-generics-vs-typeclasses-in-scala-20op

Subtype Polymorphism (Inheritance) subtyping allows a function to be written to take an object of a certain type T, but also work correctly, if passed an object that belongs to a type S that is a subtype of T (according to the Liskov substitution principle) [2]

Parametric Polymorphism (Generics) Parametric polymorphism allows a function or a data type to be written generically, so that it can handle values uniformly without depending on their type [3]

Ad-hoc Polymorphism (Type Classes) the term ad hoc polymorphism [refers] to polymorphic functions that can be applied to arguments of different types, but that behave differently depending on the type of the argument to which they are applied [4]

I will skip the first one, based on subtyping, because it still depends on static code and on some invariant among a category of different datatypes.

The two others are more relevant to what I want to get at, which is that you have a computation defined in a form that allows it to be combined with a type to yield a version of the computation unique to that type.

So they both encode this notion of 'variable functions', but if I want to make a comparison at the level of classic lambda functions, it's as if, instead of computing the result from a value, you would pick an output value from a set of precomputed values based on the input. Much like you would use a table to compute a sine if the CPU doesn't have a sine instruction. The idea of a dispatch table with a switch case is similar to this principle: you have a set of predefined functions that are selected based on the input type. Whereas some computation rules applied to a graph could achieve the same thing. And it can also deal very well with asynchronous / simultaneous operations (even if not all of them are executed in parallel on the hardware).
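Here is a minimal sketch of that 'pick a predefined function based on the input type' idea next to the type-class version (the HasArea class, the shape types and areaByTag are made-up names, only for illustration):

import qualified Data.Map as Map

-- Ad-hoc polymorphism: the compiler selects the implementation from the type.
class HasArea a where
  area :: a -> Double

newtype Circle = Circle Double   -- radius
newtype Square = Square Double   -- side

instance HasArea Circle where area (Circle r) = pi * r * r
instance HasArea Square where area (Square s) = s * s

-- Dispatch-table version: the "type" is only a runtime tag, and the function
-- is picked out of a table of predefined implementations.
data Tagged = Tagged { tag :: String, value :: Double }

dispatch :: Map.Map String (Double -> Double)
dispatch = Map.fromList
  [ ("circle", \r -> pi * r * r)
  , ("square", \s -> s * s) ]

areaByTag :: Tagged -> Maybe Double
areaByTag (Tagged t v) = ($ v) <$> Map.lookup t dispatch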

My goal for the moment is not to have the whole range of operations that you would have in a real molecular computer, or to get to a physics model.

It's more to have computations that can modify the graph to alter the computation being done, while still fitting into a formal model.

But it seems to me that even having a graph representation is not enough to encode the computation, because you also need to know the tree traversal method.

I already have this sort of thing: to compute an HTML page from a computation tree, you need to compute the leaves first and embed the result into the parent node, while to compute bones, you need to compute each branch in order and apply the result to its child branches. I'm pretty sure there must be a formal theory about that, like the one explained in the article about LL/LR parsing, but I can't put my finger on it.
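Here is a rough sketch of what I mean, with the same rose tree folded children-first (HTML-like nesting) versus parent-first (bone-like accumulation); the Tree type and the two folds are illustrative names, not code from any of the systems mentioned here:

data Tree a = Tree a [Tree a]

-- Bottom-up: each node sees the *results* of its children (HTML-like nesting).
bottomUp :: (a -> [b] -> b) -> Tree a -> b
bottomUp f (Tree x ts) = f x (map (bottomUp f) ts)

-- Top-down: each node sees an accumulator inherited from its *parent*
-- (skeleton-like: a bone's transform is composed with its parent's).
topDown :: (acc -> a -> acc) -> acc -> Tree a -> Tree acc
topDown f acc (Tree x ts) = Tree acc' (map (topDown f acc') ts)
  where acc' = f acc x

-- Render nested tags, and accumulate offsets along a bone chain.
html :: Tree String -> String
html = bottomUp (\t kids -> "<" ++ t ++ ">" ++ concat kids ++ "</" ++ t ++ ">")

bonePositions :: Tree Double -> Tree Double
bonePositions = topDown (+) 0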

NodixBlockchain commented 3 years ago

For those interested, I found the abstraction I was looking for in Stepanov's book 'Elements of Programming'; he calls it a 'coordinate structure' and it gets into all the points I'm searching for, in case someone ends up with the same questions :)

I'm still digesting the two chapters, but at least I feel less like I'm in uncharted territory :)

A concept is a coordinate structure if it consists of one or more coordinate types, zero or more value types, one or more traversal functions, and zero or more access functions. Each traversal function maps one or more coordinate types and/or value types into a coordinate type, whereas each access function maps one or more coordinate types and/or value types into a value type. For example, when considered as a coordinate structure, a readable indexed iterator has one value type and two coordinate types: the iterator type and its distance type. The traversal functions are + (adding a distance to an iterator) and - (giving the distance between two iterators). There is one access function: source.
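Here is a rough sketch of how I read that definition in Haskell terms (my own illustration, not Stepanov's code; the class and method names are invented): a coordinate type, a value type, traversal functions from coordinates to coordinates, and an access function from coordinates to values.

{-# LANGUAGE TypeFamilies #-}

-- Coordinate structure, roughly: coordinate types, value types,
-- traversal functions (into a coordinate type) and access functions (into a value type).
class Iterator i where
  type Distance i
  type Value i
  advance  :: i -> Distance i -> i    -- traversal: iterator + distance -> iterator
  distance :: i -> i -> Distance i    -- traversal: two iterators -> distance
  source   :: i -> Value i            -- access:    iterator -> value

-- A position in a list as a (very naive) readable indexed iterator.
newtype Pos a = Pos [a]

instance Iterator (Pos a) where
  type Distance (Pos a) = Int
  type Value (Pos a) = a
  advance (Pos xs) n = Pos (drop n xs)
  distance (Pos xs) (Pos ys) = length xs - length ys
  source (Pos (x:_)) = x
  source (Pos [])    = error "source: past the end"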

Funny thing: when I look for coordinate structure on Google I mostly find results from chemistry =)

keean commented 3 years ago

@NodixBlockchain I think Stepanov's "Elements of Programming" is one of the best books on Generics I have read. I recommend reading it from start to finish several times. It is not an easy book, but once you get the idea of what is going on, the early chapters make a lot more sense.

I also recommend reading "From Mathematics to Generic Programming" by Stepanov & Rose as it presents the same material in a different way, which can help with understanding, plus some really interesting history of mathematics and programming.

NodixBlockchain commented 3 years ago

Yes, there are all the algorithms to find isomorphisms between 'coordinate structures' and so on; I couldn't find much reference to this term in the context of programming outside of Stepanov's book though.

The other book is also on my list; I found some videos on YouTube as well.

keean commented 3 years ago

@NodixBlockchain Try "Data Structures and Network Algorithms" by Robert Endre Tarjan, which is a book on algorithms recommended by Stepanov.

mbuliga commented 3 years ago

I just found this issue, thank you for interest. There is now a new page of experiments and technical notes on chemlambda: https://chemlambda.github.io/index.html which I hope can answer many questions. There is a lot of information which one can play with, spread over several repositories.
For more please let me know, I'd be happy to discuss more, because the project started from a question in geometry and it now returns to it, but there are many directions which I would like to develop, like the "made of money" idea from the repository hapax. or which I would like a real programmer to magically do it, like a graphical horoscope app which would allow chemlambda molecules to live and circulate between many phones :)

NodixBlockchain commented 3 years ago

https://youtu.be/2RMXujk5AhM?t=1543

Gerald Edelman - Brain Dynamics to Consciousness (Full Lecture)

The kind of theory I'm into :)

NodixBlockchain commented 3 years ago

I just found this issue, thank you for interest. There is now a new page of experiments and technical notes on chemlambda: https://chemlambda.github.io/index.html which I hope can answer many questions. There is a lot of information which one can play with, spread over several repositories. For more please let me know, I'd be happy to discuss more, because the project started from a question in geometry and it now returns to it, but there are many directions which I would like to develop, like the "made of money" idea from the repository hapax. or which I would like a real programmer to magically do it, like a graphical horoscope app which would allow chemlambda molecules to live and circulate between many phones :)

Originally I found this project while looking into the relationship between graphs and computation models; I searched for things like 'graph theory + lambda calculus' and found this project :)

What I have in mind is really some kind of computation model with self-modifying code that would follow a formal computation theory, and I think going through graph theory is really the way to go, because chemlambda shows how graph manipulation can lead to a whole computation model that can be equivalent to lambda calculus, but also adds more possibilities :)

In the video Edelman mentions graph theory several times as well.

I'm still digging into the whole thing :)

mbuliga commented 3 years ago

Oh thank you. I'm more into applied biochemistry. Many links are available from the page of chemlambda, but maybe relevant for the situation we are in now is this short story https://archive.is/Ir9GG .

I would search in databases of biochemical reactions for patterns akin to those I use (and everybody else does, in so various fields of math and computation).

There is also the problem that graph rewrite systems (GRS) and term rewrite systems (TRS) are very different beasts. There is no clear way to define an isomorphism between a GRS and a TRS. Indeed, you can turn a term of a TRS into its abstract syntax tree, which is a graph. But you can't always turn a term rewrite into a graph rewrite on the tree graph in such a way as to preserve the locality of the rewrite.

In the opposite direction, it is sometimes possible to define a function which decorates the edges of a graph with terms, such that if the graph is the abstract syntactic tree of a term then the root edge of the graph is decorated with that term. However, if you pass to graph rewrites, even if the graph rewrite is local (like the graphical version of the beta rewrite say), the decoration of the graph changes globally.

So I think that TRS and GRS are not comparable without the introduction of external hypotheses, which may destroy the meaning of the sought isomorphism.

An implication is that GRSes with only local rewrites (like in nature) are not merely an unnecessary graphical decoration of things which can be explained with a TRS alone (like lambda calculus); they are intrinsically interesting outside their (not properly defined) isomorphism with TRSes.

But you'll say that any GRS for which there is a program is equivalent to the TRS (the program). Agreed! But the TRS will consist mainly of a generator of randomness and a destroyer of the supplementary structure (like random shuffles of lists), all run on a machine whose code can't be controlled. On the other side, a GRS like real chemistry uses real space and probability distributions which are functions of the state, so again it is not at all clear in what sense the two systems are really comparable.

If there is a low-hanging fruit though, it is that maybe in real chemistry we can identify a GRS which works (not perfectly, but) well enough that we can use ideas from functional programming to get real chemical effects. And damned be the search for perfect semantics.


NodixBlockchain commented 3 years ago

Thanks for the input :) I can see from your answer that there doesn't seem to be a very clear formal system to get from a GRS to a TRS or vice versa.

Weirdly enough, Edelman doesn't really get into neurochemistry at all; he builds his whole brain theory on a mixture of philosophy, Darwinism and computation models.

In theory it sort of leads to a category / type system; he made a robot that he called Darwin, following these sorts of principles:

[two slide images from the lecture: Sans titre2, Sans titre3]

He makes a statement like: programs make mistakes, and mistakes are a full part of the Darwinist process, but they don't crash from mistakes, they learn from them.

I already started to get into a computational model close to lambda calculus, based on a blockchain which uses a graph system. It's more limited than full lambda calculus / a Turing machine, but I'm pretty sure it can get close, even if I'm quite in the dark regarding the computation model it entails.

Already from the Stepanov book I get that you need at least a tree/graph traversal-order algorithm in addition to the graph itself, because the way it's processed changes the result. He gives all the algorithms to formalise how two 'coordinate structures' can be tested for isomorphism.

But it's already pretty cool, because the graph can be represented very easily graphically, so it allows editing the computation completely visually from the graph, and the code to execute the graph ends up looking very much like a parser combinator.

It's not completely operational yet, but I already have skeletal animation with a 3D scene represented using a graph like this. Using this system I think I'm already pretty close to having a full match between modifications in the graph and a term / functional programming representation.

But it's very experimental; I'm not sure where it's going to lead, but so far I'm getting pretty interesting results from it.

But what I got from chemlambda is that it also has this goal of having dynamic systems that show some form of resilience in a random, chaotic environment, or can form organisational patterns based on some structural constraints and organise elements from random input. (Not sure how to put that more eloquently :D)

Kinda like these videos:

https://www.youtube.com/watch?v=7l38QUJENhE

https://www.youtube.com/watch?v=6LZvIkM6JqY

mbuliga commented 3 years ago

As long as your GRS is restricted to tree graphs, it is indeed equivalent to a TRS, where you translate terms to their AST. But what if the GRS acts on a class of graphs larger than trees? Then the equivalence breaks (in interesting ways).

Here is, related to the video of Edelman lecture, the part about neural groups, an experiment with two "neurons".

[image: 2neurons] https://user-images.githubusercontent.com/39514942/91279759-95092c00-e78e-11ea-8d41-32bf778d8740.gif

available also in the collection here https://chemlambda.github.io/collection.html#159

These graphs are encodings of two linear systems with some inputs and outputs connected. Imagine something like having two lambda terms, say T1 and T2, and you mix their inputs (free variables) and outputs (values of the terms) in the sense that you feed some outputs into some inputs. Then you reduce this mix graphically.

keean commented 3 years ago

This is what electronics design tools like Symplectic do. The problem is that designs like this are very hard to reason about and verify. When I worked on VLSI designs we used to write a prototype in 'C' as it was much easier to validate, and then compare the data from the electronic system with the prototype for validation. The system I worked on was for MPEG encoding, so you would apply the same configuration settings to both systems and then compare the inputs and outputs bit for bit to validate the design.

The point I am making is that designing free input/output systems is much harder than writing sequential code. So a language like this will not get used much, because not many people will be smart enough to use it, and those that do will spend a lot of time validating designs.


NodixBlockchain commented 3 years ago

This is what electronics design tools like Symplectic do. The problem is that designs like this are very hard to reason about and verify. When I worked on VLSI designs we used to write a prototype in 'C' as it was much easier to validate, and then compare the data from the electronic system with the prototype for validation. The system I worked on was for MPEG encoding, so you would apply the same configuration settings to both systems and then compare the inputs and outputs bit for bit to validate the design. The point I am making is that designing free input/output systems is much harder than writing sequential code. So a language like this will not get used much, because not many people will be smart enough to use it, and those that do will spend a lot of time validating designs.

Yes! It's not easy to write code like this, but the idea is that you don't build the graph by writing code, because the good thing about graphs is that they can be very easily represented visually, and this sort of system works pretty well.

I had the chance to meet Eric Wenger, who is the author of MetaSynth / Voyager etc., and this whole system works with graphs that are edited visually and plugged into each other.

[screenshot: mux-deep-editor]

I already applied this principle to build audio synths, dynamic web pages, and 3D scenes; I think the mind can still cope very well with 'recursive graph' systems when they are presented visually.

For it to be meaningful, it needs some amount of prebuilt basic functional/logic bricks that are semantically meaningful to begin with, as well as their compositions/combinations.

The MetaSynth system from Eric Wenger shows it's possible to have this sort of function working on audio/video/3D (depth-field rendering with some algebra) that can be combined together. I didn't dig into the software too much, but he showed me a quick demo of the system and the possibilities are amazing :)

Well, that's mostly step 1; if I manage to get there it would already be something :)

But then step 2, to go in the Edelman kind of direction, is that you don't actually write the whole tree; it's more 'goal oriented', and the 'system' works by trial and error to find the graph solution to the problem.

In the video there is also this slide:

[slide image from the lecture: Sans titre5]

And I think that's sort of the problem: you can't really 'prove' much about why and how the solution works, or even whether it's the best possible solution to the problem.

It would be similar to neural networks, except neural networks are complete black boxes, and they don't allow extracting meaningful generalisations or categories or semantics by themselves, and they often require some preprocessing step to prepare the data for the neural network to work well.

mbuliga commented 3 years ago

I agree, these GRS are not for humans, they are for machines or for understanding nature (biochemistry in this case).

On the other hand, at first there were only (few) mathematicians playing with bits in the emerging computer architectures.


NodixBlockchain commented 3 years ago

When I look at the definition of a complex system, I think it touches on the point I'm looking into.

Like the definition of a complex system as one that cannot be explained only by looking at its separate elements on their own.

I'm not a chemistry buff, but I think I'm not wrong if I say you can't really account for the properties of a complex molecular structure (at the most complex level, let's say a living organism) only by looking at the set of atoms that compose it.

In terms of Stepanov's Elements of Programming, just looking at the set of atoms would correspond to iterators, which are like a flat sequence of elements that cannot capture the particular organisation, i.e. how atoms are connected to form more complex molecules whose properties depend not only on the set of atoms composing them, but also on how those atoms are connected together.

Coordinate structures can capture this idea of molecular organisation; Stepanov considers those coordinate structures as a form of iterator or set, but they can capture more complex configurations between the elements of the set.

But then it would need a form of type reflection to be able to capture the properties of said coordinate structure, because it's the type of the object/structure tree that captures the information about how each element is related to the others, and I can completely get Keean's points on the flaws of the type-reflection approach.

That said, inside a compiler, it still has to hold 'type variables' in order to build the AST, so a compiler would be able to capture this information about how each element of a set relates to the others in a dynamic system, but once compiled this information is lost.

That's why I started to look into GRS as a possible way to analyze those relationships.

In a way there is already a form of this in DirectShow: the graph builder is able to automatically connect a file input to an output in real time. But it has two layers above the C++ typing: a layer of COM for the base module interface encapsulation, plus a 'media type' for each 'pin', which allows the graph builder to connect a file input to an output through any combination of codecs/filters/image-audio format converters, depending on what is installed on the system. New codecs/filters/converters can be added without recompiling anything, which allows building graphs specific to the particular combination of input source, graphics/audio device and codecs present on the system, which are not known at compile time.
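As a toy sketch of that pin-matching idea (the Filter record, the example registry and buildChain are all made up; the real DirectShow graph builder negotiates through COM interfaces and is far richer), a chain from a source media type to a sink media type is found at runtime from whatever filters happen to be registered:

import Data.List (find)

data Filter = Filter { name :: String, inType :: String, outType :: String }

-- Whatever codecs/filters/converters happen to be installed on this machine.
registry :: [Filter]
registry =
  [ Filter "FileSource"  "file.avi"     "avi-stream"
  , Filter "AviSplitter" "avi-stream"   "mpeg4-video"
  , Filter "Mpeg4Dec"    "mpeg4-video"  "raw-video"
  , Filter "VideoOut"    "raw-video"    "display" ]

-- Greedy runtime graph building: keep plugging in any installed filter whose
-- input pin type matches the current media type, until the sink type is reached.
-- (A sketch only: it assumes the registry contains no media-type cycles.)
buildChain :: String -> String -> Maybe [String]
buildChain from to
  | from == to = Just []
  | otherwise  = do
      f    <- find ((== from) . inType) registry
      rest <- buildChain (outType f) to
      pure (name f : rest)

With this registry, buildChain "file.avi" "display" gives Just ["FileSource","AviSplitter","Mpeg4Dec","VideoOut"], and installing a new codec is just one more registry entry, with nothing recompiled.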

It's this kind of thing that would not be easy to do with languages like Haskell. I don't see a really good way to do this with any language I know of.

DirectShow is still a basic example; Eric Wenger's system is more developed. I'm not sure if it would be able to model 'complex systems', I think it's still a bit limited in this regard, but it already gives an infinite number of combination possibilities with a modular functional composition approach. And as the modules are loaded and the graph built at runtime, this sort of system would allow analysing coordinate structures whose particular organisation is not known at compile time.

With an approach based on typed module interfaces and a runtime/load-time graph-building mechanism, I think it could give more opportunity for graph analysis that would capture the feature of a 'complex system', defined as a whole exhibiting properties that cannot be captured only by analysing the set of elements that compose it.

I'm not sure if the way I put it is extremely clear though :) but it's something that has been bugging me, or obsessing me, for a while, and I can't seem to find the good silver-bullet system to explain this.

Chemlambda seems to be close, especially through the lens of the AlChemy of Fontana and Buss that I found on chemlambda-related sites.

mbuliga commented 3 years ago

I'm not sure if the way I put it is extremely clear though :) but it's something that has been bugging me, or obsessing me, for a while, and I can't seem to find the good silver-bullet system to explain this.

Maybe process mining? https://en.wikipedia.org/wiki/Process_mining

I'm not a chemistry buff, but I think I'm not wrong if I say you can't really account for the properties of a complex molecular structure (at the most complex level, let's say a living organism) only by looking at the set of atoms that compose it.

I think the contrary (with some caveats). After all, there exist universal Turing machines, we all use them, we can even build them. The tower of software we put on them is for our human convenience, to be able to easily program or interact with them as humans, not because we need all the huge pile for universal computation. Likewise, a cell is nothing more than a very special pattern of chemical reactions. When we explain (what we understand about) a cell, we invent, for our human understanding, all sorts of names and analogies, but the cell functions well without the tower of semantics we inflict on it.

Chemlambda seems to be close, especially through the lens of the AlChemy of Fontana and Buss that I found on chemlambda-related sites.

ALCHEMY is a very strange beast. Historically it is very important, but otherwise it is a system which disregards exactly what's most interesting: computation. Indeed, ALCHEMY uses a soup (a multiset) of lambda terms in normal form, and there is a template for an infinity of possible chemical reactions:

A + B = NormalForm(A B)

Therefore the most interesting part from the point of view of computation, namely NormalForm(), is just ignored.

Later Fontana used a pi-calculus, which is another fashionable way (along with category theory) to ignore how computation really happens. It's magical. Who cares? they say, because they are interested in the meaning of the computation, not in the computation itself. I believe that computation, concretely, is all that happens, semantics attached or not. But how does it happen in nature?

I'm not even that strange, there is an old fight here: TOC vs TOP https://pron.github.io/posts/what-we-talk-about-when-we-talk-about-computation

I am TOC not TOP :)

Unlike ALCHEMY, chemlambda uses graphs (some of them can be obtained with a parser, graph = lambda2mol(term), but in general it does not matter) and purely local graph rewrites (i.e. rewrites that can be executed by local machines, like enzymes) as chemical reactions. The computation is central here.

What is left, and you may consider this a critique, is how the graphs are built, before they engage in random chemical reactions.

NodixBlockchain commented 3 years ago

Who cares? they say, because they are interested in the meaning of the computation, not in the computation itself. I believe that computation, concretely, is all that happens, semantics attached or not. But how does it happen in nature?

A quick tangential answer, but somehow it can also be a valid concern: I remember a while back I was arguing with a freemason about Pythagoras, trying to make a sort of point like this, that computation is all that matters, and he answered me with something like: computation, like Pythagoras' theorem, is a solution to a problem as old as mankind, and not a thing of its own.

I think it's also a bit the point that Edelman makes all along, and even Riemann I think made that sort of argument, that computation, or any kind of theoretical science, is maybe more a solution to a problem of the mind than really something reality needs in order to function.

It's more the question of how useful those concepts are for solving human problems that matters, rather than how 'real' they are; I think it's a bit related to the degeneracy thing Edelman talks about as well.

Maybe at the neurological level, no two persons will have the same neural pattern used to represent the 'computation' or whatever is going on.

Very interesting answer otherwise, I'm going to dig into all this more :)

For quite a while, I’ve been disturbed by the emphasis on language in computer science… Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. … State machines… provide a uniform way to describe computation with simple mathematics. The obsession with language is a strong obstacle to any attempt at unifying different parts of computer science.

That seems like something Stepanov would agree with :)

Computer scientists collectively suffer from what I call the Whorfian syndrome — the confusion of language with reality…Many of these formalisms are said to be mathematical, having words like algebra and calculus in their names. … Despite what those who suffer from the Whorfian syndrome may believe, calling something mathematical does not confer upon it the power and simplicity of ordinary mathematics.

Good point as well :)

Well, originally I started to get into this because I started to develop a micro-kernel architecture, and I needed something to smash different modules/drivers together, compiled indifferently with Windows/Visual Studio, or GCC, or any other compiler, and to add new modules/drivers without having to recompile the whole tree. Like with the PCI bus, modules are matched by vendor/device ID, and that's all. Even using precompiled NDIS modules for network devices and such.

But then I noticed the same structure works for filesystems, for window systems, for a lot of things actually; even at their core, web applications like, say, Joomla or WordPress can be similar in that way of adding modules and configuring the system to load them together.

Also with this possibility to log/debug the particular configuration used at some point, when a lot of the loading and configuration is done more or less automatically, or can become relatively complex without being directly specified in a program.

But then I pushed it further to have decentralised/distributed computation.

Thanks for the answer, lots of food for thought :)

NodixBlockchain commented 3 years ago

I'm not even that strange, there is an old fight here: TOC vs TOP https://pron.github.io/posts/what-we-talk-about-when-we-talk-about-computation I am TOC not TOP :)

The conclusion I reached is that if you want to stick to a pure computation model, there are two things you are going to miss: streams (open/read/write/close are not really computation, and don't have much of an equivalent in mathematics, natural science or physics), and interrupts, in the sense of having procedures that can be executed at random moments, such as for UI input devices or other hardware devices that work asynchronously with the main CPU. You can even add threads to this if targeting multi-core, but if shared mutable state is avoided, I guess it also comes down to an interrupt paradigm.

There can be ways to get around those, but to me it looks like all the ways that 'programming languages' depart from a pure computation model are there either to handle streams or interrupts. Those are not described in lambda calculus or computation models, and it's where all the clumsiness comes from in most languages.

keean commented 3 years ago

@NodixBlockchain The computational model for streams is called "codata". Haskell deals with this very nicely - lazy computation and codata go together very well, just like eager computation goes with data.
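For example, a stream as codata is defined by how it is observed rather than by a terminating base case; the definition is always productive, and laziness means a consumer only forces as much as it needs. A minimal sketch:

-- Codata: a stream has no "nil" case; it is defined by how it is observed.
data Stream a = Cons a (Stream a)

-- Corecursive definition: always productive, never finished.
iterateS :: (a -> a) -> a -> Stream a
iterateS f x = Cons x (iterateS f (f x))

nats :: Stream Integer
nats = iterateS (+ 1) 0

-- The consumer decides how much of the infinite object to observe.
takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

main :: IO ()
main = print (takeS 5 nats)   -- [0,1,2,3,4]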

The problem is, like with most pure computational models, efficiency is difficult to achieve, and hard for programmers to understand.

One nice solution that keeps everything eager is 'coroutines'. Coroutines can be modelled at a high level, see JavaScript's 'generator functions' and Go's 'goroutines'.

Things like open/read/write can be modelled at the type system level by something I would call a 'protocol'.

Fundamentally the difference between 'logic' (mathematics) and computation is that mathematics has no 'time'. A proposition is either true, false or unprovable; there is no before and after. In computation 2 + 2 is not 4 until the expression is reduced, so there is a before ("2 + 2") and an after ("4"). This is what Turing contributed to mathematics.

So a pure representation of something must be "true for all time" and declarative. We represent the state machine in the type system: the type of a new file is Unopened, after opening it's Open, and after closing it's Closed. The transitions are triggered by functions (messages) but we want to tie these to the File type.

-- Empty types representing the states of the file state machine.
data Unopened
data Open
data Closed

-- All three states are Files.
class File a
instance File Unopened
instance File Open
instance File Closed

-- Transitions: each function consumes a file in one state and
-- returns it in the next state (signatures only, as a sketch).
open :: Unopened -> Open
read :: Open -> (Open, Buffer)
write :: Open -> Buffer -> Open
close :: Open -> Closed

I am not sure this is the best we can do in Haskell, but it has the problem that the functions are not really tied to the typeclasses. We could imagine writing a protocol like this:

protocol File (Unopened | Open | Closed) where
   open :: File Unopened -> File Open
   read :: File Open -> (Buffer, File Open)
   write :: File Open -> Buffer -> File Open
   close :: File Open -> File Closed

Which is sort of a cross between a type class and a Generalised Algebraic Data Type (GADT).

NodixBlockchain commented 3 years ago

Am I right if I say codata and corecursion will mostly work if there is a sort of base case that is the same for all values generated? Like if the stream is essentially an iterator?

Because the main problem is that streams are not necessarily defined following a type definition, and there are all the serialisation formats, like ASN.1, protobuf or BNF-described formats, that can even become a grammar of their own, or the stream can just be random junk.

Even besides this, when dealing for example with sockets, the behavior of read/write will depend on the socket protocol or the operating system, and there can be many ways to handle a situation. To me it still seems to lack the kind of 'concept', as Stepanov would put it, that would make the compiler able to check whether the behavior of the socket-manipulation algorithm is 'correct', or whether the algorithm actually implements the desired behavior.

Even more so if you get asynchronous stream answers.

Even with goroutines, if the channel is actually a socket, there are many ways to handle a block or a read/write return value, and many flags/options that can be used with sockets that will change the behavior, how the algorithm should handle their return values, and what there will be in the buffer.

To me the solution could be along the lines of having a completely different abstraction in the language, replacing the stream abstraction with something else, and the compiler would generate the code using the operating-system functions that correspond to the desired behavior.

I didn't dig into Go too much, but it still seems quite limited regarding all the different ways a socket can be handled.

NodixBlockchain commented 3 years ago

To get a bit more in depth, the problem as I see it is that almost all file formats or network protocols will have first a header that contains at least the type of the data and its length, then the data itself.

Which means you will have at least two successive read operations that depend on each other. And a read operation that returns zero or an error doesn't necessarily mean the whole data has been processed (the file can be damaged/truncated, the connection can be stopped for any reason).
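Here is a sketch of those two dependent reads for a length-prefixed frame, over a plain byte list to keep it self-contained (the Frame type, the header layout and parseFrame are only illustrative, not a real protocol); the second read depends on the length recovered by the first, and a short input is reported as truncation instead of silently succeeding:

import Data.Word (Word8)

data Frame      = Frame { frameType :: Word8, payload :: [Word8] } deriving Show
data ParseError = Truncated | Empty deriving Show

-- Header: 1 tag byte + 2 length bytes (big-endian), then `len` payload bytes.
parseFrame :: [Word8] -> Either ParseError (Frame, [Word8])
parseFrame (t : hi : lo : rest)
  | length body == len = Right (Frame t body, rest')
  | otherwise          = Left Truncated           -- stream ended mid-payload
  where
    len           = fromIntegral hi * 256 + fromIntegral lo
    (body, rest') = splitAt len rest
parseFrame [] = Left Empty
parseFrame _  = Left Truncated                    -- not even a full header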

Possibly you'll have a layer of compression or encryption on top of it, or things like chunked transfer (each chunk is preceded by its size in bytes; the transmission ends when a zero-length chunk is received).

And there are systems like AVI/MP4/OGM where 'virtual streams' are multiplexed inside a physical stream.

The idea would be to be able to define the stream layout and a processing algorithm, with something that allows controlling the progress, let's say in bytes.

And then to be able to use the stream definition as an input for computation.

I'm pretty sure that with a system like this, the open/close points could be implicit and determined by the compiler, along with some error management, integrated into a 'concept' for the whole stream-processing algorithm, so that the whole algorithm can be checked at compile time.

Then maybe there would be the problem that certain kinds of stream-processing algorithms could not be checked for termination and such, and some form of restriction would be needed to make sure the algorithm terminates.

I don't know of any language that really allows doing this properly, even Go; maybe their approach is something along the lines of limiting stream operations to the kind that can be checked properly, but I don't think Go has that many features for complex protocol definition and deserialisation that can be integrated into a computation.

mbuliga commented 3 years ago

@NodixBlockchain

There can be ways to get around those, but to me it looks like all the ways that 'programming languages' depart from a pure computation model are there either to handle streams or interrupts. Those are not described in lambda calculus or computation models, and it's where all the clumsiness comes from in most languages.

If I understand well, in the background of this discussion is the hope that we (somebody) will some day make a truly decentralized computer, which has I/O and is a true computer in the sense that it can do anything (computable).

The discussion is then that somehow an attempt like this may lead, or has led, to some blockage, where there is no solution or there are solutions which turn a decentralized system into a centralized one, in time or in space. The discussion then turns to details and after a while the participants or the readers are lost.

This comment is to make a (perhaps well known) statement that a decentralized computer based on a term rewrite system (like lambda calculus is) is simply not possible from first principles. You can't mend it, nobody can.

On the contrary, a decentralized computer based on a local graph rewrite system (like Lafont Interaction Combinators, the pure version, or chemlambda or chemSKI, etc) is possible. The problem here is that there are no programmers to efficiently use it (yet).

So, like it or not, the goal of making a decentralized computer using what programmers like is about as possible as the transmutation of elements was for alchemists.

keean commented 3 years ago

I like Clojure's approach to exactly these kinds of systems, which it calls an "Epochal timeline" I think.

Clojure is dynamically typed, but they have developed "spec" to specify wire protocols. I also note that the creator of Clojure says that the internals of programs should reflect their externals (paraphrased), so to me this implies that modules should also have APIs defined with "spec".

The state model I have been proposing to investigate for my language turns out to be the same as Clojure's Epochal timeline, except I want to allow controlled mutability (different topic however). The main difference in approach is that I want to capture the typing of protocols and modules in a type system rather than a tool.

Basically, in my model/terminology a 'protocol' is a type that represents the exchange of data between two programs. Calling an API method in a local library is a very simple example of a protocol. Making a remote procedure call, or calling a RESTful API with JSON data, is another slightly more complex protocol. The most complex protocols involve more than two programs, each maintaining an internal state that affects the messages it can send and receive.

NodixBlockchain commented 2 years ago

https://www.academia.edu/27047824/Emergence_Oriented_Programming

Emergent properties arise from the numerous interactions of many simple agents. It is difficult to encapsulate emergence in a traditional programming language. An object-oriented language such as Java is capable of encapsulating the properties and behavior of a single agent within a class instance, or object. Thus, Java provides a specific language mechanism for modularizing agents. However, emergent structure does not exist within a single agent, nor does it result from the execution of a single agent behavior; rather it arises from the interactions of many agents over the course of time. Thus, to encapsulate emergence, we need a program language construct that can express modularity across a set of objects and a set of object interactions. This requires encapsulation of a set of classes and class method invocations. Object-oriented languages do not support such modularity. A new programming language paradigm has been introduced in recent years to address such "cross-cutting" concerns [7]. Aspect-oriented programming (AOP) is a language model that supports the encapsulation of concerns that cross the object-oriented class boundary. AOP provides direct language support for modularizing emergent structure and behavior. For example, an aspect may be written that describes a set of method invocations, which may occur in multiple classes. Emergent behavior may thus be recognized when such a set of method invocations occurs during a program execution.

4.1 Using Aspect-Oriented Programming

Aspect-oriented programming allows the implementation to accurately reflect the conceptual layers from the cognitive model by creating modular structures that encapsulate the layers on the software side [7]. It supports modularized code that can be mechanically inserted into executable code at specific, user-definable locations. The key point is that the set of locations cannot be encapsulated modularly at the lower layer because the programming language at the lower level cannot provide a construct for describing the set of execution points. For example, if we want to perform some logging operation that should occur directly before a particular set of invocations of the method foo(), we can either manually insert code prior to each invocation of foo() throughout the Java program, or use a higher level language such as AOP that has a structure to express all invocations of foo() simultaneously. By intertwining the new AOP modules expressed at the higher level into the executable bytecodes (in the case of Java), the original source is untouched, yet the desired functionality is integrated in the resulting executable. Many aspects can be applied to the same code, and aspects can be applied to other aspects. This powerful mechanism provides a direct, tangible representation of the layers within the emergence model described in section 3. We use the AspectJ compiler [1] to develop all code related to the visualization and implementation of downward causation of emergence, thus implementing the emergence framework without tangling the agent and swarm models.

https://archive.org/stream/ArtificialLife14/Artificial%20Life%2014_djvu.txt

Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems.