keean / zenscript

A trait based language that compiles to JavaScript
MIT License

Macros and DSLs #31

Open shelby3 opened 7 years ago

shelby3 commented 7 years ago

A homoiconic programming language can parse itself to an AST expressed as native data that can be transformed within the language, evaluated, or isomorphically serialized to human-readable syntax.
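
To make that concrete, here is a minimal TypeScript sketch (all names are mine, purely illustrative) of the payoff: the AST is ordinary data that the language itself can transform, evaluate, and serialize back to readable syntax.

// The AST as native data.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

// (1 + 2) represented as data:
const ast: Expr = {
  kind: "add",
  left: { kind: "num", value: 1 },
  right: { kind: "num", value: 2 },
};

// Transform within the language (constant folding):
function fold(e: Expr): Expr {
  if (e.kind === "add") {
    const l = fold(e.left);
    const r = fold(e.right);
    if (l.kind === "num" && r.kind === "num")
      return { kind: "num", value: l.value + r.value };
    return { kind: "add", left: l, right: r };
  }
  return e;
}

// Serialize isomorphically back to human-readable syntax:
function show(e: Expr): string {
  return e.kind === "num"
    ? String(e.value)
    : `(${show(e.left)} + ${show(e.right)})`;
}

console.log(show(ast), "=>", show(fold(ast))); // (1 + 2) => 3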

I don’t think the sacrifices in syntactic and grammatical clarity caused by reduction to a minimal consistent grammar, such as the hierarchical lists-of-lists of s-expressions, are necessary to achieve the homoiconicity which enables macros and DSLs. Although Lisp's low-level semantics distills to Seven Primitive Operators, which makes its AST eval simple to implement, the complexity of the higher-level semantics which can be expressed (and thus the potential semantic complexity of a macro) is no more limited than for any other Turing-complete language.

Requirements:

  1. Self-hosted compiler.
  2. Bracketed quotes.
  3. Type inference localized to expressions.
  4. Everything-as-an-expression syntax so that we can select the parser based on the data type the expression is assigned to.
  5. Incremental AST compilation whilst parsing so the compiler knows the data type for #4.

For macros we’d only need #1 - 3, because macros take some or all of their arguments as AST references. We wouldn’t want macros to depend on wide-scale inference, as this could result in livelocks or deadlocks on inference decidability. K.I.S.S. For DSLs such as a SQL query language, we need to select a customized parser. If we know the data type of the expression being parsed is, for example, SQL.Query, and that type has an associated customized parser (and, separately, compiler), then that parser is run to create the customized AST of the said expression (and if it is not a macro, the associated compiler is then run). There are cases where the expression determines its own type (i.e. the expression has side-effects and is not assigned, or the type of the reference it is assigned to is inferred from the expression's type), in which case the current parser is presumed. This strategy would not work with any wide-scale type inference, which is yet another reason I am against non-localized type inference.
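
A rough TypeScript sketch of that dispatch (every name here is hypothetical): the compiler keeps a registry of parsers keyed by target data type, consults the incrementally inferred type of the binding, and falls back to the main-language parser.

// A generic AST node; custom parsers produce their own node kinds.
interface Ast { kind: string; text: string }

type Parser = (source: string) => Ast;

// Main-language parser used when no custom parser applies.
const defaultParser: Parser = (src) => ({ kind: "expr", text: src });

// Registry of data types that carry their own parser, e.g. SQL.Query.
const customParsers = new Map<string, Parser>();
customParsers.set("SQL.Query", (src) => ({ kind: "sql-query", text: src }));

// Requirement #5 supplies declaredType; requirement #4 selects the parser.
function parseExpression(declaredType: string | undefined, source: string): Ast {
  const parser = (declaredType && customParsers.get(declaredType)) || defaultParser;
  return parser(source);
}

// e.g. for `let q: SQL.Query = SELECT name FROM users`:
console.log(parseExpression("SQL.Query", "SELECT name FROM users").kind); // "sql-query"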

A customized parser and compiler could invoke another parser and compiler for an expression within its grammar (perhaps using a similar strategy of selecting by target data type or by explicit delimiters, e.g. the use of curly braces in React JSX), and this could even be recursive.
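
React JSX already illustrates the delimiter-driven variant: the markup parser hands each curly-brace span back to the host-language expression parser, and those expressions may nest more markup, recursively. A small TSX example (assuming React):

import React from "react";

const items = ["a", "b"];

// <ul>...</ul> is parsed by the markup grammar; each {...} switches back to
// the host-language grammar, whose expressions here nest more markup.
const list = (
  <ul>
    {items.map((x) => (
      <li key={x}>{x.toUpperCase()}</li>
    ))}
  </ul>
);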

keean commented 7 years ago

For macros we’d only need #1 - 3

I don't want macros :-) I don't want a second weaker version of application in a language.

What you are basically saying is that the data format for expressing objects combined with the AST is the language syntax. I am not sure I want this - as the AST currently features things like application nodes which you would not want in the language syntax. I think homoiconicity is over-rated, and I would rather have a nice user friendly syntax without it compromising the AST, and vice versa.

shelby3 commented 7 years ago

I don't want macros :-)

Many programmers want them. Also DSLs.

a second weaker version of application

Huh?

as the AST currently features things like application nodes

Huh?

I would rather have a nice user friendly syntax without it compromising the AST, and vice versa

Me too. I didn't see any flaw in my proposal for macros which would prevent that. Perhaps you misread the OP.

keean commented 7 years ago

Macros are bad and are included in languages because normal function application is not powerful enough. If you want a function, write a function.

shelby3 commented 7 years ago

Making things too powerful is often worse than a proper separation-of-concerns.

I doubt you can achieve the same things that macros and DSLs can achieve without making the type system so powerful and obtusely convoluted (and with mind-bending and numbing type inference) that nobody can understand the code any more, not even the person who wrote it. K.I.S.S.

keean commented 7 years ago

Macros provide an escape hatch to bypass the type system (in some languages); this is a bad thing. Messing about with the program code itself using macro splicing is a bad idea; just write a function to do the thing in the first place. With generics you should never need a macro anyway.

keean commented 7 years ago

Macros also make debugging harder as they are applied at compile time. Any errors reported at runtime will be in the post macro processed code, so the line numbers and context cannot be found in the program source. I have yet to see one compelling use of macros in the last 20 years (and this comes from someone that wrote their own macro-assembler and used to think that macros were all you needed to build a compiler and a high level language).

keean commented 7 years ago

Maybe you can persuade me: can you provide an example of a macro that cannot be more cleanly implemented in another way?

shelby3 commented 7 years ago

Macros enable editing the source code of the arguments before they are evaluated. A functional lazy evaluation of those arguments will not enable the transformation of their source code. Macros can thus perform some automated operations based on the structure of the source code within their arguments, not just the evaluated values of the arguments. That is turtles all the way down, recursively.

For example, suppose we have a list of functions we want to call with different numbers of arguments, each with different types, and we want to add a new common argument to the end of each of these calls and collect the results in a list. It is much easier to call a macro to do that, as it doesn't need to fight with the type system; the type checking will be performed after the macro has done its code transformation. This example could also be done with partial application without a macro, except the typing can still get unwieldy, because those functions might have different type signatures for the remaining default arguments, or we have to manually specify the default arguments. Macros are basically a way the programmer can rescue himself from the fact that language development moves too slowly and can't always do everything that you and I could imagine it might do one day in the distant future.
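
As a hedged illustration, here is only the post-expansion TypeScript that such a hypothetical macro (call it appendCommonArg!) would generate; the macro itself is not real, and the point is that type checking happens on this expanded form:

// Functions of different arities and types:
function f(a: string, last: number): string { return a + last; }
function g(a: boolean, b: string, last: number): string { return `${a}${b}${last}`; }

const common = 42;

// Expansion of the hypothetical appendCommonArg!([f("x"), g(true, "y")], common):
const results: string[] = [
  f("x", common),
  g(true, "y", common),
];
console.log(results); // ["x42", "truey42"]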

Macros are basically useful anywhere you need to do transpiling, so you don't have to write your own whole-program compiler. And there have been many times I wanted to transform code, e.g. to convert an if used as an expression into a lambda function compatible with TypeScript. If TypeScript had powerful macros, I might already be done with some of the features I wanted for my transpiler! Instead I will have to reinvent the wheel.
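
For instance, the if-as-expression transform mentioned above could emit an immediately-invoked lambda; the "before" form is hypothetical source, and only the "after" is valid TypeScript:

// Hypothetical source (not valid TypeScript):
//   const label = if (n > 0) { "positive" } else { "non-positive" }

// Transpiled output:
const n = 3;
const label = (() => {
  if (n > 0) return "positive";
  return "non-positive";
})();
console.log(label); // "positive"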

Also much of the functionality needed for macros is also needed for evaluating and running code sent over the wire, i.e. dynamic code compilation and modularity. We can't expect to link everything statically in this era of the Internet.

DSLs enable friendlier syntax such as SQL queries that are checked grammatically by the compiler.

Edit: a possible new idea for the utility of AST macros: https://github.com/keean/zenscript/issues/11#issuecomment-305997607

keean commented 7 years ago

Macros enable editing the source code of the arguments before they are evaluated.

I know what macros are. I am saying there is always a better way to do the same thing, without having source-to-source rewriting going on in the program code, which obfuscates errors and results in another case of write-only code.

shelby3 commented 7 years ago

there is always a better way to do the same thing

Someday. But someday can be too many years (even decades) too late. Refresh.

keean commented 7 years ago

I have not written a macro for about 20 years, so that someday was decades ago :-)

shelby3 commented 7 years ago

I don't limit my analysis to only your anecdotal limited needs. I’m interested in mine and those of others.

I think DSLs are the most important thing that can't be done reasonably without transforming code. I explained how to enable the "macros" (parser + compiler transformation of code) for those, based on detecting the type of the expression.

Show me how to write a SQL query DSL (without any noise in the syntax that isn't in the normal SQL syntax!) that is grammatically checked at compile (evaluation) time, and that can be done without code transformation. Scala can sort of finagle it by using infix function names as keywords, but that is not compile-time parsing of the SQL-specific syntax issues.

keean commented 7 years ago

You do not need macros to create DSLs.

Show me how to write a SQL query DSL

Sure I did it in Haskell back in 2004 :-)

shelby3 commented 7 years ago

Sure I did it in Haskell back in 2004 :-)

I am not interested in your obtuse HList gobbledygook.

A DSL should be elegant and easy to comprehend and work correctly. I asked you to show me how to do this, not asking you to brag.

keean commented 7 years ago

What's obtuse about this:

moveContaminatedAnimal :: DAnimalType -> DCntdType -> DFarmId -> Query ()
moveContaminatedAnimal animalType contaminationType newLocation = do
    a1 <- table animalTable                                         -- FROM animals
    a2 <- restrict a1 (\r -> r!AnimalType `SQL.eq` animalType)      -- WHERE animal type matches
    c1 <- table contaminatedTable                                   -- FROM contaminated
    c2 <- restrict c1 (\r -> r!CntdType `SQL.eq` contaminationType) -- WHERE contamination type matches
    j1 <- join SqlInnerJoin a2 c2 (\r -> r!AnimalId `SQL.eq` r!CntdAnimal) -- INNER JOIN on animal id
    j2 <- project j1 (CntdAnimal .*. HNil)                          -- SELECT only the animal-id column
    doUpdate animalTable                                            -- UPDATE animals
        (\_ -> AnimalLocation .=. toSqlType newLocation .*. HNil)   -- SET location = newLocation
        (\r -> r!AnimalId `SQL.elem` relation j2)                   -- WHERE id IN (relation j2)

shelby3 commented 7 years ago

That doesn't look like unadulterated SQL to me! And you are filling up my thread with (obtuse Haskell and braggart) noise!

You have no chance in hell of creating anything popular...

...if you think that gobbledygook is more readable than SQL syntax for the person who wants to express SQL.

keean commented 7 years ago

That's relational algebra :-)

Here's an example:

R1 = join(CSG,SNAP)
R2 = select(CDH,Day="Monday" and Hour="Noon")
R3 = join(R1,R2)
R4 = select(R3,Name="Amy")
R5 = join(R4,CR)
R6 = project(R5,Room)

Taken from here: https://www.cs.rochester.edu/~nelson/courses/csc_173/relations/algebra.html

You have no chance in hell of creating anything popular.

C# LINQ seems pretty popular and it's based on exactly the same ideas.

shelby3 commented 7 years ago

That's relational algebra :-)

It's not elegant. It's gobbledygook.

keean commented 7 years ago

I don't care! It's not elegant.

It's very elegant. Relational algebra avoids several problems that SQL has as a language. SQL is not a proper algebra; it's got a lot of ad-hoc elements that are pretty bad.

shelby3 commented 7 years ago

It's not SQL.

I asked you to show me how to do unadulterated SQL without a code transformation.

keean commented 7 years ago

It's better than SQL. Why would I recreate a flawed query model along with all its problems? I took the opportunity to do something better.

In any case you are not going to be able to parse SQL syntax in the host language with macros either, because you cannot represent SQL in the language's AST.

shelby3 commented 7 years ago

Whether SQL is good or not is not germane to the point of this thread, which is whether it is possible to model the unadulterated syntax of another language without a code transformation. Yes or no?

In any case you are not going to be able to do that with macros either, because you cannot represent SQL in the language's AST

Incorrect. Re-read the OP.

keean commented 7 years ago

No, and it's not possible to model it with code transformation either.

shelby3 commented 7 years ago

it's not possible to model it with code transformation either.

Incorrect.

keean commented 7 years ago

prove it :-)

shelby3 commented 7 years ago

prove it :-)

Re-read the OP. It is explained there.

keean commented 7 years ago

That's not a proof; I want code :-)

It's not possible without custom parsers that can be implemented for new types of literals.

shelby3 commented 7 years ago

It's not possible without custom parsers that can be implemented for new types of literals.

Which is exactly what I wrote in the OP.

shelby3 commented 7 years ago

Back on examples where macros can't be done with functions: there is no way to write a function that will execute its argument expressions in the lexical scope of the caller and provide the names of the variables as a separate argument. There is no way to form that functional closure with the replacement lexical binding without a code transformation.

Paul Graham’s anaphoric macro eliminates the boilerplate of lambdas. It couldn’t be done without boilerplate or macros: Scala’s _ shorthand only works for lambdas that take one argument, and not when it is nested within another lambda.

Another example is when you want to declare numerous types and/or functions (in the lexical scope of the caller) based on some pattern (that would be an absolute pita to manually enter from the keyboard). A function can’t declare new types and functions. You can only do this with code generation, as sketched below.
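
A minimal TypeScript sketch of that kind of code generation (names illustrative): the pattern is data, and the "macro" is a script that emits the declarations as source text before compilation.

// The pattern we'd hate to type out by hand:
const fields = ["name", "age", "email"];

// Emit one accessor declaration per field:
const generated = fields
  .map((f) => {
    const cap = f[0].toUpperCase() + f.slice(1);
    return `export function get${cap}(r: Record<string, unknown>): unknown { return r[${JSON.stringify(f)}]; }`;
  })
  .join("\n");

console.log(generated); // write into a module that the compiler then sees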

There are instances where only code transformation will suffice.

keean commented 7 years ago

there is no way to write a function that will execute its argument expressions in the lexical scope of the caller and provide the names of the variables as a separate argument.

And why would you ever want to do that?

There is no way to form that functional closure with the replacement lexical binding without a code transformation.

Which is again the kind of trick I don't want to allow. If I wanted that kind of thing I could provide first-class environments, but it just makes code harder to understand for no real reason.

What algorithm do you need the above to implement? Does it help me implement a sort function?

Another example is when you want to declare numerous types and/or functions (in the lexical scope of the caller) based on some pattern

Needing to do this is a symptom of a lack of sufficient abstractive power in the language itself. The 'numerous functions' should be a single generic function.

Generics remove the need for macros in the language by providing a type safe way to abstract code over parameters.

shelby3 commented 7 years ago

And why would you ever want to do that?

What algorithm do you need the above to implement?

To eliminate repetitive boilerplate.

The 'numerous functions' should be a single generic function.

That doesn’t apply to function calls and data instance declarations. To eliminate repetitive boilerplate.

I agree that macros create a new syntax, which leads to less readability (i.e. fewer people who readily understand the syntax). But it is not for me to decide when and where the scenarios dictate the value of their usage. Note however that typing systems and function application can also become so complex to reason about, in order to facilitate certain coding patterns (e.g. that Scala and Haskell gobbledygook), that the effect is essentially a new syntax as well (although I understand that tools can perhaps aid with the mechanics, but not with the higher-level semantics). Nevertheless, code transformation macros are a lower priority for me than DSLs. DSLs are also code transformation, but they employ a custom parser and AST (before they’re transpiled); whereas macros (as I’ve envisioned them) reuse the AST and parser of the main language.

And DSLs such as SQL have, in some cases, very wide appeal and readability.

shelby3 commented 7 years ago

Any errors reported at runtime will be in the post macro processed code, so the line numbers and context cannot be found in the program source.

This problem can be rectified with a @pragma in the generated code that maps generated lines back to the original source.

C# LINQ seems pretty popular and it's based on exactly the same ideas.

It is not germane to this thread whether someone can invent an API and/or new syntax which is popular among some programmers. Rather, it is germane whether we can also offer, for example, unadulterated SQL syntax. Notwithstanding that SQL is afaik much more widely known (and thus readable) than LINQ.

keean commented 7 years ago

and put an arrogant smirky face 😏 on it to rub it in the wound you've opened

I can see you interpret that differently, from "friendly disagreement".

I just wish you’d organize yourself better and not be so flippant.

I'm not writing a formal journal paper. This is a friendly informal discussion (at least I think it is), with the only restriction being to try and keep on topic. I think my position on macros is fairly clear, and I had assumed that because you had not yet mentioned macros in any of the other discussions, this was not something you felt was really core to the language. I had previously posted a manifesto for the language that did not get big objections, so I thought we were at least interested in the same kind of things, but were having difficulties settling on syntax, which is understandable because I am not even sure what I want for syntax in all areas; there are so many compromises. Macros go against the principles of readability and directness in such a fundamental way that I was shocked you brought them up, out of the blue, at this late stage.

That doesn’t apply to function calls and data instance declarations. To eliminate repetitive boilerplate.

That's what generic modules are for.

But it is not for me to decide when and where the scenarios dictate the value of their usage.

It is when creating a new language. Python does not have macros; does that seem to limit the libraries and applications available?

shelby3 commented 7 years ago

Macros go against the principles of readability and directness in such a fundamental way that I was shocked you brought them up, out of the blue, at this late stage.

We actually briefly mentioned macros and the problems they create for readability in our past discussions in these issue threads, and I had agreed.

I am raising the issue of macros and DSLs now because I want a holistic analysis of the extent and manner of integrating other syntaxes, such as markup languages. This is relevant because React's JSX is very relevant to the JavaScript ecosystem right now. I received a private message this past week from a developer who is interested in coding apps for my blockchain project, and he cited several JS frameworks he uses, one of which is React-like and employs JSX.

I can see you interpret that differently, from "friendly disagreement".

I could definitely detect your “shocked”; it hit me as defiance, combativeness, and a touch of arrogance.

Btw, I am not trying to influence (other readers or even you) on the direction of your project. I post in your project's current discussion forum (the issues threads) because it is the only place I can currently get peer review (predominantly from you), and as a continuation of all the discussion we did before, to try to have all my latest understandings/proposals/conclusions in one place for now (for lack of a better place). I highly doubt that we will be working on the same language going forward. We'll go off in separate directions due to different priorities, and if one of us makes vastly superior progress (especially in terms of adoption), then perhaps the other will abandon/merge his fork. (E.g. you want to explore typing research such as improved row polymorphism, and I don't have time for that right now.)

Besides, neither of us is going to get any community momentum until we actually produce a tool that can be used (especially for real projects).

I have no problem with you stating your stance unequivocally and frankly, but I don't think it is necessary to make it so personally combative. I think it is possible to be cordial and help others feel good about working with you.

For example, you could have started your first post with, “Frankly I am a bit shocked and maybe even a little bit put off that you would raise the issue of macros after we expended many months on what I thought was the core concept of employing genericity, typing, and God-smacked type inference to perfect static typing, even though nobody else in the entire world has ever been able to figure out how to do so without gobbledygook and without needing to break out of the typing system. But I am a God and Superman, so just listen to me and STFU”.

At least then I could have just laughed, because you would have been honest.

If there is one thing that really turns me off, it is (incorrect!) browbeating and a “holier than thou” stench. As I said, unless the other side is off the rails, we'd need to be absolutely certain in order to be justified in browbeating the other about some technical nirvana. I don't think there is any absolute technical nirvana. When technology becomes a religion, I am gone.

It just felt like you were trying to say I was completely wrong and silly for making the thread. And I think you've just admitted that is what you felt. So at least you are being honest and I am too.

Bouncing ideas off colleagues can either have the bright, airy ambiance of an open-minded discussion, or it can be a dark, bloody, depressing religious holy war.

I am somewhat interested in exploring better static typing. But with trepidation.

I am also contemplating the likelihood that, by being more cautious about adding extensively expressive strict static typing (as contrasted with TypeScript's sometimes heuristic typing, e.g. subtyping variance), I am probably more aligned with the philosophy of JavaScript's ecosystem than someone who wants a better Haskell + C = Rust and is contemplating a Rust + PureScript = ZenScript formula. But neither of us knows yet what the sweet spot should be for the next great move forward in mainstream programming languages. I am trying to find my way forward by keeping my eyes and ears open as I navigate the design process simultaneously with an urgency to complete a real-world project. I am trying to keep up also with your (and @skaller's) far-ranging, research-level design interests, but to some extent I can't entirely do that while also doing the former.

That's what generic modules are for.

The devil is in the details of that. Consider the level of complexity that would be required to express most of the variants that could be coded with a macro, and still you will NEVER be able to express all possibilities. NEVER.

There is more than one perspective or way to attack a problem set.

From my perspective, it is better to have a discussion about the tradeoffs of something, so that those on either side of the issue can refer to the enumeration of the issues. That is superior to “STFU and go away”, or to any demeanor or ambiance which contributes to demoralizing those who just want to have an open-minded inspection and thus understanding.

Your perfect static typing heaven might be another person's hell. I don't even know how all these extensive typing ideas of yours will work out. All I know is you can't have a total order on typing, and thus there will ALWAYS be cases where the programmer needs to break out of the typing system. ALWAYS. Mark my word.

A typing system is at best a set of hints and partial checks on correctness. It is never completely consistent. There will always be cases where one has to do something outside the statically checked type system.

Don't forget Gödel's incompleteness theorems: any sufficiently powerful system of axioms that is complete is inherently inconsistent. Total orders don't exist.

In short, I think maybe you are chasing perfect rainbows. You probably thought macro-assemblers were the holy grail 20+ years ago, then Haskell and pure functions in the prior decade, and now on to the next failed attempt to attain absolute nirvana...

...I am just trying to understand attributes of paradigms and relate to use cases. And understand what programmers want. Understand what is popular and why. Etc...

And mostly I am just trying to understand what, if anything, can accelerate and help my current programming needs. And integrating a markup language is probably one of those high-priority items for me, so I wanted to summarize how it fits in holistically with DSLs (and macros relate to DSLs).

Python does not have macros

https://www.quora.com/Does-Python-have-macros-If-so-how-do-they-work

http://stackoverflow.com/questions/15669921/adding-macros-to-python

and I had assumed that because you had not yet mentioned macros in any of the other discussions, this was not something you felt was really core to the language

Again, I didn't realize in the past that, for example, type inference might interfere with how best to integrate a markup language. That is the next issue I want to discuss in this thread.

shelby3 commented 7 years ago

The prior discussion has caused me to more fully understand the duality of a total order of compile-time typing versus the partial orders of run-time typing.

keean commented 7 years ago

But I am a God and Superman, so just listen to me and STFU”.

I would never claim that. I know I am just as likely to be wrong as the next man. Everything is my current opinion, and could change at any time.

you will NEVER be able to express all possibilities. NEVER

You already can; that's what Turing completeness is about. We already know you can write any program that could ever be written in a Turing-complete language. We are discussing differences in readability, structure, and style here. Nothing is as black and white as you suggest.

In short, I think maybe you are chasing perfect rainbows. You probably thought macro-assemblers were the holy grail 20+ years ago, then Haskell and pure functions in the prior decade, and now on to the next failed attempt to attain absolute nirvana

I don't think so, because I have spent that time actually writing code in those paradigms. As such I have already been writing programs using the model I am proposing, although through a layer of annoying boilerplate. Even HList stuff is elegant if you don't need the boilerplate you need in Haskell. My interest in language design started after the HList paper when I saw the amount of boilerplate Haskell needed, and things became clearer for me after reading Stepanov and seeing the amount of boilerplate C++ needed to do things. To do HList stuff in C++ required two layers of extensive boilerplate, one at the Haskell level of abstraction, and one at the C++ level of abstraction. What I have been trying to achieve hasn't really changed since the OOHaskell paper.

keean commented 7 years ago

so is static typing (e.g. lower-level dependent typing divide-by-zero or higher-level semantic errors).

You can use static refinement types, so that the type of a division is:

div : (Int, x : Int | x != 0) -> Int

Now of course the question is how do you turn an Int into an (Int | x != 0), and the answer is with path-dependent types, like this:

if x != 0 then 
   // x has type (Int | x != 0)
else
   // x has type (Int | x == 0)

So using the div function forces you to add a zero test, but only one zero test; type inference and type propagation mean you can force the caller of a function to do the test before calling the function.
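
TypeScript can approximate this with a branded type plus a user-defined type guard playing the role of the path-dependent test; a sketch (the NonZero brand is my own convention, not a standard API):

// Brand standing in for the refinement (Int | x != 0).
type NonZero = number & { readonly __brand: "NonZero" };

// The guard is the path-dependent if: in the true branch, x narrows to NonZero.
function isNonZero(x: number): x is NonZero {
  return x !== 0;
}

// div statically demands a proven-nonzero divisor.
function div(a: number, b: NonZero): number {
  return a / b;
}

const y: number = Date.now() % 3;
if (isNonZero(y)) {
  console.log(div(10, y)); // ok: y has been narrowed
} else {
  console.log("y is zero; div(10, y) would not type-check here");
}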

shelby3 commented 7 years ago

So using the div function forces you to add a zero test

Yes, of course; I also wrote about that 7 years ago for my Copute language research. But the point I made upthread is that it becomes too noisy, so instead we prefer run-time typing and generate an exception on divide-by-zero or overflow conditions. There is a trade-off consideration, and static typing is not always the chosen optimum.

It is important to relate this to my next comment about unbounded nondeterminism. If we do model all divide-by-zeros and overflows as types, then we have to figure out how to maintain meaningful semantics for those errors; otherwise they are still exceptions. An exception is something that puts the program in an indeterminate state.

shelby3 commented 7 years ago

You already can; that's what Turing completeness is about.

Incorrect.

Bounded nondeterministic Turing-completeness can’t model unbounded non-determinism (aka Hewitt’s indeterminism), i.e. the unbounded entropy of permutations in the universe. I had already explained that to you in the Concurrency issue thread, and I provided a link to Carl Hewitt’s video where he explained it.

The point is that we can not model all of the unbounded permutations of composition. Sometimes we will have to punt and accept run-time exceptions (where the program is in an indeterminate state outside the safety of the static type system). We can not statically type everything.

Additionally, expression should not violate SPOT (single-point-of-truth), should solve the Expression Problem, and should otherwise not require refactoring. To state that we can express something in a non-SPOT refactoring hell is not that germane to PL design.

In short, I think maybe you are chasing perfect rainbows. You probably thought macro-assemblers were the holy grail 20+ years ago, then Haskell and pure functions in the prior decade, and now on to the next failed attempt to attain absolute nirvana

I don't think so, because I have spent that time actually writing code in those paradigms. As such I have already been writing programs using the model I am proposing, although through a layer of annoying boilerplate. Even HList stuff is elegant if you don't need the boilerplate you need in Haskell. My interest in language design started after the HList paper when I saw the amount of boilerplate Haskell needed, and things became clearer for me after reading Stepanov and seeing the amount of boilerplate C++ needed to do things. To do HList stuff in C++ required two layers of extensive boilerplate, one at the Haskell level of abstraction, and one at the C++ level of abstraction. What I have been trying to achieve hasn't really changed since the OOHaskell paper.

And this is apparently why complete type inference appears to be such a critically important priority to you.

I think I am on board with the benefits of typeclasses, but I still need to understand fully the ramifications of higher-ranked callbacks and typeclasses.

I still need to quantify better at what high level of semantics we might need (or prefer) dynamic typing. At the low level, I think the programming and hardware design community long ago decided to prefer runtime typing, and thus exceptions, for divide-by-zero and overflow errors.

shelby3 commented 7 years ago

What's obtuse about this:

moveContaminatedAnimal :: DAnimalType -> DCntdType -> DFarmId -> Query ()
moveContaminatedAnimal animalType contaminationType newLocation = do
    a1 <- table animalTable
    a2 <- restrict a1 (\r -> r!AnimalType `SQL.eq` animalType)
    c1 <- table contaminatedTable
    c2 <- restrict c1 (\r -> r!CntdType `SQL.eq` contaminationType)
    j1 <- join SqlInnerJoin a2 c2 (\r -> r!AnimalId `SQL.eq` r!CntdAnimal)
    j2 <- project j1 (CntdAnimal .*. HNil)
    doUpdate animalTable
        (\_ -> AnimalLocation .=. toSqlType newLocation .*. HNil)
        (\r -> r!AnimalId `SQL.elem` relation j2)

It could still be made more elegant with a fully integrated DSL syntax transpiled seamlessly, which is what I proposed in the OP.

C# LINQ seems pretty popular and it's based on exactly the same ideas.

LINQ has a DSL query syntax (at least in C#).

keean commented 7 years ago

It could still be made more elegant with a fully integrated DSL syntax transpiled seamlessly, which is what I proposed in the OP.

When you go through the design permutations, I think you will find it's hard to improve on. For example, the semantics of equality are different in SQL from the host language, so we do not want to use the same symbol (although we could, due to type-classes). Defining the restrictions and joins as functions on row parameters is more powerful than simply listing the columns (which again we could do). What you see is deliberately based on the mathematical notation for relational algebra. It was probably a bad example to give when you wanted SQL, but when I was creating a DSL for databases, I came up with this, and I think it's better than SQL. It's also more powerful than LINQ.

The point is if we can define functions and operators, we can implement any DSL. We can even avoid the limitations on the use of functions and operators by having limited scopes.

I would argue the one thing you don't want to do in a DSL is change the fundamental language syntax, so function application and function definition should look the same everywhere. A domain-specific language is not about embedding SQL into any other language; it is about creating a domain-specific language within another language. As such, the embedding of SQL in Haskell would look different to the embedding of SQL in C, etc.

What I want from a DSL is something that preserves the overall flavour of the host language (so I don't get surprised by the syntax of function application suddenly changing) but lets me operate efficiently in a particular problem domain. A DSL should make it feel like the library/extension DSL was part of the original language design. So a database query DSL in Haskell should still look and feel like Haskell, but as if relational queries were built into the language from the start, like, for example, sequential IO was.
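
In TypeScript terms, such a flavour-preserving embedded DSL might read like the sketch below (a hypothetical in-memory library, not a real API): plain host-language functions and lambdas throughout, yet the relational-algebra operations read naturally.

interface Row { [column: string]: unknown }

class Relation {
  constructor(readonly rows: Row[]) {}
  // restriction = WHERE, expressed as an ordinary lambda on rows
  restrict(pred: (r: Row) => boolean): Relation {
    return new Relation(this.rows.filter(pred));
  }
  // inner join with an ordinary lambda as the join condition
  join(other: Relation, on: (a: Row, b: Row) => boolean): Relation {
    const out: Row[] = [];
    for (const a of this.rows)
      for (const b of other.rows)
        if (on(a, b)) out.push({ ...a, ...b });
    return new Relation(out);
  }
  // projection = SELECT of named columns
  project(...cols: string[]): Relation {
    return new Relation(
      this.rows.map((r) =>
        Object.fromEntries(cols.map((c) => [c, r[c]] as [string, unknown]))
      )
    );
  }
}

const animals = new Relation([{ id: 1, type: "cow", location: "A" }]);
const contaminated = new Relation([{ animalId: 1, cntdType: "x" }]);
const hits = animals
  .restrict((r) => r.type === "cow")
  .join(contaminated, (a, b) => a.id === b.animalId)
  .project("id");
console.log(hits.rows); // [ { id: 1 } ]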

keean commented 7 years ago

Bounded nondeterministic Turing-completeness can’t model unbounded non-determinism

Actually it can, you just need a random number source. Just like we can model a quantum computer on a non-quantum computer.

For example, I can put two packets into an ordered list sorted by a transport delay, where that delay is chosen randomly (and here we need a proper random number created from entropy sources like keyboard, disk, and other IO timings). Then the recipient can pull the messages in 'simulated arrival order'. Using this we can model a complete actor system, complete with non-determinism, on a single, non-parallel, classical computer running a simple sequential language like 'C'.
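
A minimal TypeScript sketch of that simulation (names mine): each message carries a randomly drawn transport delay, and the recipient drains the queue in simulated arrival order, so runs differ while the machine stays deterministic at the instruction level.

interface Packet { payload: string; arrival: number }

const inflight: Packet[] = [];

function send(payload: string): void {
  // Math.random() stands in for a true entropy source (keyboard/disk/IO timings).
  inflight.push({ payload, arrival: Math.random() });
}

function receiveAll(): string[] {
  // The recipient pulls messages in simulated arrival order.
  return [...inflight]
    .sort((a, b) => a.arrival - b.arrival)
    .map((p) => p.payload);
}

send("A"); send("B"); send("C");
console.log(receiveAll()); // e.g. ["B", "A", "C"]; order varies per run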

iTradeChips commented 7 years ago

yo shelby i signed up here just to reach you and ask you where are you bringing your BitNet forum? damn they banned even the BitNet (official) account. sorry for the off-topic guys i really need to reach shelby i dont know where to look.

shelby3 commented 7 years ago

Actually it can, you just need a random number source.

Sorry, but that is incorrect. Unbounded nondeterminism means your program is in an indeterminate state. So to model it, you would have to execute code instructions in random order, i.e. with no coherent model.

What you mean to say is that your program can interact, through its I/O, with the unbounded nondeterminism of the universe. Interaction is not a model. A program is a partial order. It is impossible to model the total order of the universe, because to do so would mean everything is predictable, thus statically known, and the past and future collapse into being indistinguishable in time order.

And science doesn’t understand quantum computers well. We need to dive into relativistic quantum mechanics, which is not yet widely accepted. I am composing a blog about this and realistic time travel. The speed-of-light is just an imaginary limitation because we choose to retain our perspective of mass and forces, but information (i.e. Shannon entropy) is not limited to such a deterministic bound. Time is only a relativistic metric between mutually referential perspectives. For disjoint perspectives, there is no mutual information of elapsed time. Note that maximum entropy (uncertainty, aka uniformity of the distribution of probability over possible outcomes) would come with maximum disjointness of perspectives, i.e. a lack of mutual dependencies, information, and order.

Using this we can model a complete actor system, complete with non-determinism, on a single, non-parallel, classical computer running a simple sequential language like 'C'.

Modeling an Actor system doesn’t model the unbounded nondeterminism of the universe, but rather the interaction with the (bounded?) entropy of your random-number entropy source. It is impossible to model the entropy of the universe; if you were such an omniscient God, then you could time travel at will to any place and time that ever existed or will ever exist in the universe, because the universe would be statically known.

keean commented 7 years ago

We can model quantum computers on normal computers. IBM has an API and a quantum simulator that runs on a normal CPU.

See: http://research.ibm.com/ibm-q/qx/ https://github.com/corbett/QuantumComputing

shelby3 commented 7 years ago

IBM has a model of something, but it isn’t a model of universal unbounded nondeterminism.

The definition of what a quantum computer is comes into play. Science doesn’t even agree on or understand this well yet. As I said, relativistic quantum mechanics is rejected by most mainstream physicists at this time.

We are getting too far off topic.

keean commented 7 years ago

You may as well say we don't understand a magic computer... We can only go with what is proven. In my opinion computers are only useful because they are deterministic. I really don't want to worry about my car not working because the engine-management computer is having a bad day and just split up with the entertainment system which it had been dating for a while...

shelby3 commented 7 years ago

You may as well say we don't understand a magic computer... We can only go with what is proven. In my opinion computers are only useful because they are deterministic.

So we don’t get too far off topic, please note the context of the limitations of static typing in which I brought up the point of unbounded nondeterminism:

The point is that we can not model all of the unbounded permutations of composition. Sometimes we will have to punt and accept run-time exceptions (where the program is in an indeterminate state outside the safety of the static type system). We can not statically type everything.

The prior discussion has caused me to more fully understand the duality of a total order of compile-time typing versus the partial orders of run-time typing.

keean commented 7 years ago

Whether we can statically type everything depends on the type system. Dependent types can type everything, as they can depend on values; however, to my mind this removes the benefit of static types, which is that we can check for correctness at compile time.

shelby3 commented 7 years ago

There is a tension between what you type statically and the degrees of freedom to accomplish composition without refactoring. The refactoring and typing complexity can become so onerous, and the project so large, that everything grinds to a halt in rigor mortis.

Static typing can help us enforce some invariants, but we should use some discretion and not get too ridiculous and try to type everything, due to the downside stated.

When static typing helps document, it is very useful. When the types become so complex that no one can even comprehend them, the utility has been lost.