Closed marianoguerra closed 4 years ago
💬 François-René Rideau
I'm currently working on persisting continuations / threads. Does anyone have hints to previous work on the topic?
💬 Drewverlee
To what extent do our natural biases leak into the models (data structures, algorithms) we use? E.g. is it possible that a tree data structure is more appealing to both users and developers because it mirrors a hierarchy (everything has a parent/cause), whereas a graph, which can contain a loop (which came first, the chicken or the egg?), is considered less intuitive?
It seems a relevant design choice to consider not only the universal truthiness of something but also the cognitive load it takes to use it.
📝 The Singing Neanderthals via Eric Gade
Earlier today I was thinking about Mithen’s “The Singing Neanderthals”, and how deeply embedded in the human brain musical constructs are.
Does anyone know of programming systems (particularly end-user programming systems) that use music as the programming interface, or perhaps a significant part of it?
Does anyone have tools / processes to recommend for rapidly iterating the design of a language? I’m looking for strategies to produce a document that captures the design, evolution, and potential variations of a language interface separate from implementation concerns. Good examples of this would also be very much appreciated!
What do you think of programming by example and programming by demonstration? What's the best implementation/resource/talk you have seen?
[September 4th, 2020 9:28 PM] jack529: Nice! Around 30 years ago there was a movement called Programming by Example (PBE) that tried to find a generalization of this pattern for a variety of programming tasks. I'd love to see people revisit that work with modern compute power and neural network architectures. (An early-90s history of the field can be found in Watch What I Do: Programming by Demonstration; a sequel by a different researcher is Your Wish is My Command: Giving Users the Power to Instruct their Software. Many familiar names contributed essays: Larry Tesler, Brad Myers, &c.)
📝 A dream of an ultimate OS via Jack Rusher
Oleg dreams himself to something like Plan9:
This paper is an attempt to imagine what an OS would look like and how it would work if looking for a word 'foo' in something and deleting/closing/stopping this something -- be it a paragraph of text, a network connection, a subscribed newsgroup, a process -- would all require roughly the same sequence of mouse clicks or keystrokes, and would be understood and interpreted in the same spirit by the operating system.
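A minimal sketch of the uniform-verb idea in the quote above, assuming a simple class-based protocol (all names here are hypothetical illustrations for this thread, not from Oleg's paper):

```python
# Sketch: one "verb" vocabulary (find, close) shared by very different
# resource kinds, so the same gesture means the same thing everywhere.

class Paragraph:
    def __init__(self, text):
        self.text = text
        self.open = True
    def find(self, needle):
        return needle in self.text
    def close(self):
        self.open = False

class Connection:
    def __init__(self, received_chunks):
        self.chunks = received_chunks
        self.open = True
    def find(self, needle):
        return any(needle in chunk for chunk in self.chunks)
    def close(self):
        self.open = False

def find_and_close(resource, needle):
    """The same 'sequence of keystrokes' regardless of resource kind."""
    hit = resource.find(needle)
    resource.close()
    return hit
```

The point is only that the caller of `find_and_close` never branches on what kind of thing it is operating on; the OS-level version of this would extend the vocabulary to processes, newsgroups, and so on.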
📝 How should we build the APIs of tomorrow? via Aria Minaei
There is a thread on HN
I left a comment with a few ideas:
This is from my limited understanding of some of the ideas in FOC. What did I miss/misinterpret?
0: Braid: Adding Synchronization to HTTP
🐦 historical user interfaces that aren't much talked about via Jack Rusher
I've been thinking for a while of writing something about the historical user interfaces that aren't much talked about (i.e. not the Alto -> Mac -> Windows lineage). In this instance, I decided to do a "tweet storm" instead of a blog post to see if that would be a more effective way to get the ideas into the world. As this is probably of interest to the community, I thought I should link to it here as well. The entrance to this particular Garden of Forking Paths:
🐦 ⸘Jack Rusher‽: I've been thinking about roads not taken lately, wondering whether we should revisit some of them... https://pbs.twimg.com/media/Eg0l3uGXsAAD_Aa.png
🐦 "No code" sounds like... via Christopher Galtenberg
To the point https://twitter.com/garybernhardt/status/1301190812357726208
🐦 Gary Bernhardt: "No code" sounds like "we will represent complex logical operations without a symbolic representation." I expect it to result in things that are, in fact, symbolic representations (so code) but awkwardly obscured enough to not look like code, just like the last n attempts at this.
🐦 Predicting the future of technology is easy via Roben Kleene
I came across this quote on Twitter: "In short, predicting the future of technology is easy: just look at what's already failed, and assume that somebody will eventually make a version that actually works." (attributed to a "departing Bernstein analyst"). Any thoughts? Is something already having been tried and failed an indication that it's a bad idea, a good idea, or neither?
🎥 all-the-widgets via Jack Rusher
A 1990 video by Brad Myers offers two hours (!) of screen recordings of the user interface affordances of various historical GUIs, including PARC's Bravo (1974), various Smalltalks, Interlisp-D, Star (1981), Cedar (1982), Symbolics, NeXT, &c.
Persisting continuations, programming by example/demonstration, building the APIs of tomorrow, historical user interfaces and more
Two Minute Week
🎥 2020-09-02: tests for printing to screen : Kartik Agaram via Kartik Agaram
🧵 conversation
I need closure because I've gotten excited recently about dropping my dependency on the Linux kernel, and about finding a way to provide graphics without an OS.
Using BIOS.
waves hands. No, wait, come back. Check this out: http://arclanguage.org/item?id=21532. While using VGA primitives is never going to give a large screen resolution or 60fps games, I think it might give us some nice new capabilities for very little code.
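For a sense of how little code is involved: in classic VGA mode 13h (320x200, 256 colors, a linear framebuffer at physical address 0xA0000), plotting a pixel is one multiply, one add, and one byte write. A sketch of just the address arithmetic (the base address and geometry are standard mode 13h facts; the function name is mine):

```python
# VGA mode 13h: 320x200 pixels, one byte per pixel, framebuffer at 0xA0000.
# On bare metal you'd write the color byte to this address; here we only
# compute the offset to show the machinery involved.

VGA_BASE = 0xA0000
WIDTH, HEIGHT = 320, 200

def pixel_address(x, y):
    """Physical address of pixel (x, y) in mode 13h."""
    assert 0 <= x < WIDTH and 0 <= y < HEIGHT
    return VGA_BASE + y * WIDTH + x
```

The whole "graphics driver" for this mode is that one line of arithmetic plus a BIOS call to set the mode, which is presumably part of the appeal for a minimal-dependencies project like Mu.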
I'm starting to think of the BIOS as "a standard instruction set for all the other hardware besides the processor." That seems fruitful. (https://futureofcoding.slack.com/archives/C5U3SEW6A/p1597899212000500)
The real problem down this road is input. A keyboard will be fine. A mouse seems iffy but hopefully doable. But pressure-sensitive multi-touch, fuggedaboudit. So no minority-report-style FoC demos in Mu's future. Oh well. I'm not even sure what a reasonable API for multi-touch looks like, so I'm not going to worry about it for now.