hirrolot / hirrolot.github.io

My blog
https://hirrolot.github.io/

posts/why-static-languages-suffer-from-complexity #4

Closed utterances-bot closed 2 years ago

utterances-bot commented 2 years ago

Why Static Languages Suffer From Complexity

https://hirrolot.github.io/posts/why-static-languages-suffer-from-complexity

mz0 commented 2 years ago

Fine food for thought, thanks a lot! I'd only like to point out that EWD473 is clearly dated 13 February 1975 at the end of that note.

inv2004 commented 2 years ago

Nim's solution:

import system/io, strutils

# Convert each argument to a string via `$` and collect them into a seq.
proc unwrapVarArgs(a: varargs[string, `$`]): seq[string] =
  result.add a

# Both the format string and the arguments are `static`, so the arity
# check below happens at compile time.
proc printf(p: static string, a: static openArray[string]) =
  when count(p, '*') != a.len:
    {.fatal: "number of '*' placeholders does not match number of arguments".}
  var i = 0
  for c in p:
    if c == '*':
      write stdout, a[i]
      inc i
    else:
      write stdout, $c

printf "hello * world *", unwrapVarArgs(24, "test")
emdash commented 2 years ago

I recently came across: https://ajrouvoet.github.io/files/thesis.pdf

I haven't had time to read all of it (250+ pages!), but it claims to reduce the mental burden of working within a dependent type system.

emdash commented 2 years ago

It's a good summary, but your conclusion is a bit weak: "Programming languages should be re-thought."

I don't think the subjective pain of "biformity" is nearly as bad as the subjective pain of dynamic typing. So I have to specify the program once at the value level and once at the type level. Big deal. That's way better than having to trace type errors through a large program.
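
Roughly, the duplication in question, as a minimal Rust sketch (the TypeAdd trait and Peano encoding here are just for illustration):

// Value-level addition: written once, for runtime.
fn add(a: u32, b: u32) -> u32 {
    a + b
}

// The same logic restated at the type level with Peano numerals
// (e.g. so lengths or indices can be checked at compile time).
use std::marker::PhantomData;

struct Zero;
struct Succ<N>(PhantomData<N>);

trait TypeAdd<Rhs> {
    type Output;
}

impl<Rhs> TypeAdd<Rhs> for Zero {
    type Output = Rhs; // 0 + n = n
}

impl<N, Rhs> TypeAdd<Rhs> for Succ<N>
where
    N: TypeAdd<Rhs>,
{
    type Output = Succ<<N as TypeAdd<Rhs>>::Output>; // (1 + n) + m = 1 + (n + m)
}

Any change to the logic now has to be made in both places, which is exactly the "biformity" the article describes.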

I'm skeptical about the move towards async in Rust / JS / Python myself, but it does solve a real problem with callback-oriented APIs in event-driven frameworks. I see it as a compromise between the efficiency of an FSM and the heavy-handed expressiveness of full coroutines or continuations.
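
As a rough Rust sketch of that compromise, here is the kind of resumable state machine an async fn spares you from writing by hand (the Task type is hypothetical):

enum State {
    AwaitingInput,
    Processing(String),
    Done,
}

struct Task {
    state: State,
}

impl Task {
    // Each call advances the machine by one step, much like a Future's poll():
    // cheap to store and schedule, but tedious to write and extend by hand.
    fn resume(&mut self, input: Option<String>) {
        self.state = match std::mem::replace(&mut self.state, State::Done) {
            State::AwaitingInput => match input {
                Some(s) => State::Processing(s),
                None => State::AwaitingInput,
            },
            State::Processing(s) => {
                println!("processed: {s}");
                State::Done
            }
            State::Done => State::Done,
        };
    }
}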

Another language to look at is Koka, which seems like it directly addresses the accidental complexity introduced by systems languages re: memory management, and it has a principled effect system that seems practical enough for real-world use and not a research curiosity.

ghost commented 2 years ago

I can tell you didn't care to look into Zig with any diligence. Everything after "Zig is a systems language" is about unrelated languages, and almost none of it applies to Zig. Not all systems languages are the same, you know. Otherwise there would only be one.

wongjiahau commented 2 years ago

This is a good read. The examples on printf made me question this: is a printf-like function a necessity? Is simple string concatenation worse? Of course the latter has "some" performance overhead, but is it more expensive than the cost of maintaining a custom compiler (preprocessor/macro/comptime/dependent types, etc.)?
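
To make the comparison concrete, a Rust sketch of the concatenation alternative (the greet function is hypothetical): no format string, so no arity to validate and no custom compiler to maintain; the cost is a couple of intermediate allocations.

fn greet(n: u32, name: &str) -> String {
    // No placeholders to count, hence nothing for a checker to get wrong.
    String::from("hello ") + &n.to_string() + " world " + name
}

fn main() {
    println!("{}", greet(24, "test")); // hello 24 world test
}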

Based on my limited experience performing type-level shenanigans in TypeScript in the name of "compile-time validation", I eventually found that it is essentially technical debt dressed up as a solution. Though it does what it should, literally nobody on my team (sometimes including myself) can understand the cryptic errors thrown by this type-level monster.

To generalise: is metaprogramming worth it? Doesn't a simpler solution exist?

gree7 commented 2 years ago

Awesome write-up! Thanks a lot for it. Always a lot of fun to learn new languages... in this case for me Idris and Zig.

louy2 commented 2 years ago

"Programming languages ought to be rethought." This article really makes me feel nostalgic. I had quite the similar urges to unify runtime and type-level code when I first learned programming, read about the PL scene, and started to get into contact with the myriad of features. Every new language, every language extension, every DSL, arguably are products of rethinking programming languages to some extent into some directions. Cheers to that passion to explore and create.

The article does a great job surveying type-level programming in Rust and others, but it starts to lose focus towards the end. Let's take the thought one step further. Why not just bring the static and dynamic parts on par? What's preventing them from being on par? The article wanders around criticizing all the coping mechanisms people have developed, but why do people reach for the coping mechanisms instead of attacking the core problem?

Because the complexities come from the striving for completeness, which means "if it doesn't compile, it doesn't work," while keeping soundness intact, which means "if it compiles, it works." For many problems we don't know how to write the complex details of the model ergonomically enough, so we cope and end up with "if it compiles, it probably works."

To illustrate this point, I present the example of Prolog. This year marks the 50th anniversary of Prolog's first version. Prolog has no division between the type level and the value level. It started out interpreted, and can even be used interactively, so it is dynamic; yet a Prolog program can also be compiled entirely to machine code. What enables that lack of division, however, is Prolog's sound and complete inference system. So although the semantics of Prolog is simple, using it for complex things can be a lot less intuitive than the alternatives. Case in point: chalk, the new work-in-progress trait solver of Rust, is "based on Prolog-ish logic rules."

Bugs notwithstanding, people generally want soundness for the static part; not as much for the dynamic part. Sure, the static part rarely supports induction, but how often does the dynamic part support unification?

For those striving for completeness, "dependent types alone are just too low-level" is a point many are painfully aware of. It is no coincidence that the major proof assistants come with sublanguages for proof construction and automation: Isabelle has Isar; Coq and Lean have their proof modes and tactics; Idris has elaborator reflection. But these tools rarely lower the barrier to entry. The borrow checker of Rust is essentially an automatic proof finder for memory-access theorems, and I find it amazing to what extent it has enabled so many people to use affine types.

Still, the limits of this proof finder are acknowledged by Rust contributors and users alike, with unsafe and the macro system often regarded as its escape hatches. But I have come to realize that type / model checking, compile-time computation, and syntax innovation are three problems warranting their own solutions. For type / model checking, I hope there will be better support for non-automatic linear-logic proof writing, along the lines of GhostCell and typestates; for compile-time computation, I want to see how far the capabilities of const can be pushed; for syntax innovation, I agree that operations on an abstract syntax tree would be helpful.
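
As a minimal typestate sketch in Rust (the Socket API is hypothetical): the protocol state lives in the type, so sending before connecting is rejected at compile time rather than discovered at runtime.

use std::marker::PhantomData;

struct Closed;
struct Connected;

struct Socket<State> {
    _state: PhantomData<State>,
}

impl Socket<Closed> {
    fn new() -> Socket<Closed> {
        Socket { _state: PhantomData }
    }
    // Consuming `self` makes the closed socket unusable afterwards.
    fn connect(self) -> Socket<Connected> {
        Socket { _state: PhantomData }
    }
}

impl Socket<Connected> {
    fn send(&self, _bytes: &[u8]) {
        // ... actual I/O elided ...
    }
}

fn main() {
    let s = Socket::new().connect();
    s.send(b"hi");
    // Socket::new().send(b"hi"); // compile error: no `send` on Socket<Closed>
}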

louy2 commented 2 years ago

Also, let me drop this link to a project by another young genius: Amulet, a simple functional programming language in the ML tradition.

fuergaosi233 commented 2 years ago

TypeScript seems to be a good solution, offering both the flexibility of a dynamic language and a powerful type system.

matthewdowney commented 2 years ago

Thanks for this essay. Thoughts on Clojure?

qouteall commented 2 years ago

For languages that do not have a flexible type system, such as Java, the IDE can do the job of advanced error checking. In IntelliJ IDEA, if I use printf wrongly, for example by giving one argument too many, String.format("%d", 1, 1), the IDE gives me a warning. Dependent types are not the only solution for advanced error checking.
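
For comparison, Rust puts this particular check in the compiler itself, since println!/format! parse their format strings at compile time; a minimal example:

fn main() {
    println!("{}", 1);        // one placeholder, one argument: compiles
    // println!("{}", 1, 1);  // rejected at compile time: argument never used
}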

I think IDE completion is better than macros because you can see all the code and debug it easily, while macros usually give cryptic error messages and are hard to debug.

Maybe we need a more unified and easy-to-use IDE extension system that allows a library to bundle an IDE extension that does its custom completion and error checking.