dojo / meta


Should Dojo2 widgets be based on a HyperScript-ish API? #15

Closed pdfernhout closed 8 years ago

pdfernhout commented 8 years ago

This issue is a spinoff of issue #11. @dylans wrote there: "I will say that many of the virtual DOM APIs feel like they're going to turn code into long chains of imperative code make me think there must be a better way, as that style of coding discourages reuse." On "long chains of imperative code", some examples would be nice so I can be sure I know what Dylan means. The rest of this assumes he means the imperative nature of the HyperScript-ish API. That may not be what he meant, but as I like talking about the benefits of the HyperScript API, it gives me a good excuse to talk about that. :-)

As I said there, based on an enjoyable experience with Mithril, I feel it is almost certain that an imperative HyperScript-like API is both a good choice for a modular interface to whatever vdom implementation is underneath that HyperScript layer and a good choice to support whatever Dojo2/Dijit2 abstractions are built above that HyperScript layer.

Advocating for a HyperScript API layer specifically is moving beyond the vdom pro/con issue towards a Dojo2/Dijit2 widget API, and so I created a new issue for it. Still, as HyperScript relates to the feasibility of using a vdom, it does have some bearing on deciding whether to proceed with a vdom approach. The rest of this tries to make a case that Dojo2 widgets should be written on top of a HyperScript-like API such as used by Mithril and Mercury to define their vdoms.

To clarify what this issue is not intended to be about:

== A motivating example

As an example of such a (hypothetical) HyperScript-like API in use for Dojo2, here is some hypothetical code to display a greeting and a timestamp, and to support updating the timestamp either via a button click or via some hypothetical TimeStampEditor widget (perhaps just a button that puts up a prompt or perhaps something fancier with a date picker and a clock face). Assumed below is that when the onclick function is set, it is automatically wrapped by another function that queues a redraw (similar to what Mithril does). Also, while this example uses mostly lower level HTML definition, I'd expect most Dojo2 applications are going to be composing their GUIs mostly with Dijit2 widgets of various sorts (including with i18n support and a11y support and so on).

import h = require("dijit2-vdom");
import TimeStampEditor = require("TimeStampEditor");

// Define a model as a plain JavaScript object (with no support for dependencies)
var model = {
    lastTimestamp: null
};

function updateTimestamp() {
    model.lastTimestamp = new Date().toISOString();
}

// Create an editor component we will use later
var editor = new TimeStampEditor(model, "lastTimestamp");

// Return a vdom structure reflecting the current model and able to change that model
function render() {
  return h("div#greeting", [
    h("span.salutation","Hello!"),
    h("hr"),
    "Time: ", 
    model.lastTimestamp,
    h("button", {onclick: updateTimestamp}, "Update time"),
    h("br"),
    "Or edit it here:",
    h("br"),
    h.component(editor)
  ]);
}

updateTimestamp();

// Render a vdom into the DOM and get ready to rerender as needed on redraw requests
h.mount("someExistingDivID", render);

I wanted to use "d" for Dojo as in my example in the previous issue. However, "h" may be more practical given existing tools that can convert HTML to HyperScript using "h".

This example differs from Mithril in that the component construction process is more explicit than the one Mithril uses -- which otherwise involves a behind-the-scenes construction process. The Mithril approach to components has pros and cons. One benefit of the Mithril approach is with dynamic GUIs where components might come and go a lot and would otherwise need to be tracked somehow by the developer. In general, choosing a good way to construct and track components is a challenge here, so the best way to do components is an open question to think through -- and the Mithril approach may still be best in practice, or might be built on in some other direction.

== The case for building Dojo2 widgets on a HyperScript-like API

Obviously, there are multiple programming paradigms (including Imperative, Procedural, Declarative, Functional, Object-oriented, Event-driven, and Automata-based). On typical current hardware, all running code is ultimately imperative code defined by machine language stored in sequential bytes in memory. So, the issue is what abstractions we choose to put on top of that imperative base for what purposes (including subjective aesthetics).

One advantage of a low-level imperative base of using HyperScript to define vdoms via function calls is that you can then build whatever abstractions you want on top of that in a reasonably efficient way. Otherwise, you may end up trying to map whatever abstraction you want to use today onto, say, someone else's OO or functional model chosen yesterday which has its own set of assumptions, with a result that may be slow and hard to debug due to extra unneeded layers.
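To make that concrete, here is a minimal sketch (hypothetical code, not any actual Dojo2 or Mithril API) of the key property: a HyperScript-style h function is just a plain function that parses a "tag#id.class" selector and returns plain data, which is exactly what makes it an easy base to build abstractions on.

```javascript
// Minimal hypothetical h() sketch: parses a "tag#id.class" selector and
// returns a plain vnode object. Real libraries (Mithril, Mercury) do much
// more, but the principle is the same: function calls in, plain data out.
function h(selector, attrs, children) {
  // Allow h(selector, children) with attrs omitted
  if (Array.isArray(attrs) || typeof attrs === "string") {
    children = attrs;
    attrs = {};
  }
  attrs = attrs || {};
  var parts = selector.split(/(?=[#.])/);
  var tag = parts[0] || "div";
  parts.slice(1).forEach(function (part) {
    if (part[0] === "#") attrs.id = part.slice(1);
    if (part[0] === ".") {
      attrs.className = (attrs.className ? attrs.className + " " : "") + part.slice(1);
    }
  });
  return { tag: tag, attrs: attrs, children: children || [] };
}

var vnode = h("div#greeting", [h("span.salutation", "Hello!")]);
console.log(JSON.stringify(vnode.attrs)); // prints {"id":"greeting"}
```

Because the return value is ordinary JavaScript data, any higher-level abstraction (a builder, a template compiler, a widget system) can target it without special machinery.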

Working with a HyperScript-ish API feels very close to just specifying all the HTML DOM nodes yourself. Any webapp programmer is going to have to get comfortable with the DOM sooner or later, so why not sooner? Then, such an informed programmer will probably eventually want to turn towards abstractions over that layer to save time or avoid repetition to hopefully reduce maintenance costs.

In the simple example I provided of using a "h" function to compose a GUI in an imperative way, that's just the base. A single-page webapp I wrote with about forty virtual pages (first using Dijit and then converted to Mithril's vdom approach) uses a declarative approach to define most pages using JSON-ish structures that are then converted via a "builder pattern" to more imperative vdom construction functions (Mithril's API), where Mithril in turn then does more work on rendering to initialize and assemble components and translate them to DOM nodes. However, I could (in theory) have somehow used, say, a constraint resolving engine like Cassowary to go from specifications to Mithril m function calls. Or a trained neural network or whatever. Also, the application-specific widgets themselves were functions that usually created objects which then used the imperative Mithril functions to compose parts of UIs (including sometimes using other widgets instead of simple DOM objects).
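The builder-pattern idea mentioned above can be sketched roughly like this (hypothetical spec format and h stand-in, invented for illustration): a declarative JSON-ish page specification is walked and translated into imperative vdom construction calls.

```javascript
// Hypothetical sketch of a "builder pattern": a declarative JSON-ish page
// specification is walked and translated into h() calls. h() here is a
// stand-in that just returns plain vnode objects.
function h(tag, attrs, children) {
  return { tag: tag, attrs: attrs || {}, children: children || [] };
}

// Declarative specification for part of a page
var pageSpec = [
  { type: "header", text: "Survey" },
  { type: "question", id: "q1", prompt: "How was your day?" }
];

// The builder maps each spec item type to a vdom construction function
var builders = {
  header: function (item) { return h("h1", {}, [item.text]); },
  question: function (item) {
    return h("div", { "class": "question" }, [
      h("label", { "for": item.id }, [item.prompt]),
      h("input", { id: item.id }, [])
    ]);
  }
};

function buildPage(spec) {
  return h("div", {}, spec.map(function (item) {
    return builders[item.type](item);
  }));
}

var page = buildPage(pageSpec);
console.log(page.children.length); // prints 2
```

The point is that the declarative layer is an application choice built on top of the imperative base, not something the base has to dictate.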

So, in practice, I don't feel an imperative-ish vdom API base is limiting. It is even freeing in that you can build whatever you want on top of it. Granted, it may be sometimes useful or even more computationally efficient in some sense to work below that API, an example of which is below. But the imperative API by itself does not prevent you from creating good abstractions for using it towards a goal of reusable code.

Obviously, a good framework is going to provide opinionated tools for using that imperative base effectively and quickly to build data-rich web applications. That's where Dojo2/Dijit2 could shine best in my current opinion -- in innovative leveraging of a HyperScript-ish API above a vdom to support reuse at that higher level of abstraction (while also not preventing lower-level work at the HyperScript level or below when needed for some reason).

For example, "standardWidgets.ts" is some code I wrote for that webapp which creates some standard widgets for that builder like checkboxes, textareas, radio buttons, and so on using Mithril in a way where all those "widgets" pass W3C validation for basic accessibility (with labels and "for" attributes). That code could be clearer and more modular, so I'm not holding it up as something to emulate in that way. The point is that it shows how you can take the low level imperative approach of using a HyperScript-ish API and use that low-level API however you want within JavaScript/TypeScript -- in this case, driven by more abstract GUI specifications. If I someday had time to add ARIA support to improve accessibility beyond standard accessible HTML, then I could make some changes to that code (and elsewhere) to add the right ARIA attributes. Or I could replace most of that code with calls to a library that used a HyperScript-ish API to define ARIA-compliant labelled widgets.
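In the spirit of that approach (this is a hypothetical sketch, not the actual standardWidgets.ts code), an accessible widget helper might look roughly like this: every input is paired with a label whose "for" attribute matches the input's id, which is what basic W3C accessibility validation checks for.

```javascript
// Hypothetical sketch of an accessible checkbox "widget": the input always
// gets a matching <label for="..."> so the pair passes basic accessibility
// validation. h() is a stand-in returning plain vnode objects.
function h(tag, attrs, children) {
  return { tag: tag, attrs: attrs || {}, children: children || [] };
}

function checkbox(id, labelText, model) {
  return h("div", {}, [
    h("input", {
      type: "checkbox",
      id: id,
      checked: !!model[id],
      onclick: function () { model[id] = !model[id]; }
    }, []),
    h("label", { "for": id }, [labelText])
  ]);
}

var model = { agree: false };
var widget = checkbox("agree", "I agree", model);
widget.children[0].attrs.onclick(); // simulate a click
console.log(model.agree); // prints true
```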

In the case of Mithril, the vdom representation is essentially a relatively straightforward JSON-like object (though including functions). So, you can even bypass those imperative HyperScript-like function calls to some extent if you really want to, because they are just returning nested JavaScript data structures made mostly of basic JavaScript objects. Although you are then tying your code to a specific underlying vdom representation if you do that.

For example, as a kludge I did just that in the standardWidgets.ts file mentioned above where it creates sets of checkboxes and radio buttons. I call delete questionLabel[0].attrs["for"]; to remove an unneeded "for" attribute generated into the vdom representation elsewhere by the panel builder system as a default that is appropriate most of the time, given there is otherwise always one label to go with each input widget. That kludge unfortunately binds that code tightly to the Mithril vdom representation. It works, but "subtraction" is problematical, especially when it violates an abstraction boundary as it does in this case. So, ideally, I should refactor that entire build process so the "for" attribute is never added in those cases -- maybe someday. My main point is that you can get in there, bypass a HyperScript API, and muck about with the underlying representations the API constructs if you really want to -- once you have committed to a specific vdom. Or, at least, committed to a set of vdoms that use a common internal representation if you want your code to be usable with more than one vdom. Still, doing so even for just one vdom may be problematical if the vdom representation were to change -- although that is probably unlikely for any specific mature vdom library, as it would be a breaking change for the many users who would have done this sort of thing.
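In spirit, that kludge amounts to something like the following (hypothetical vnode shapes, not Mithril's exact internals): because vnodes are plain data, nothing stops you from mutating them directly after the builder produces them.

```javascript
// Hypothetical illustration of the "subtraction" kludge: reaching past the
// h() API into the underlying vnode objects to delete an attribute that a
// builder added by default. This works only because vnodes are plain data,
// and it tightly couples the code to one vdom's internal representation.
function h(tag, attrs, children) {
  return { tag: tag, attrs: attrs || {}, children: children || [] };
}

// The builder attaches a "for" attribute by default...
var questionLabel = [h("label", { "for": "q1" }, ["Pick all that apply:"])];

// ...but for a group of checkboxes there is no single input to point at,
// so the code mutates the vnode directly to remove it.
delete questionLabel[0].attrs["for"];

console.log("for" in questionLabel[0].attrs); // prints false
```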

Ultimately, building a complex webapp for a browser requires using JavaScript to create and configure trees of DOM nodes. There has to be some imperative layer in a webapp or supporting libraries that does that work (even if just ad-hoc internal APIs calling DOM functions). Typically, with a vdom approach, this DOM manipulation is isolated to some rendering function that does a diff from some new vdom structure you assembled somehow relative to the last one you supplied to decide how to change the DOM. The new vdom structure can be assembled via HyperScript-ish function calls. Or it can be assembled from interpreting some template or specification either indirectly into HyperScript calls or directly into assembling the vdom representation. The only question might be, do you try to completely hide that vdom construction layer for some reason in Dojo2? I'd advocate that a HyperScript-ish API layer should not be hidden, and instead such an API should be celebrated as an opinionated choice to support adaptation and extension of Dojo2/Dijit2 in unplanned ways -- similar to the approach Mithril, Mercury, and some other vdom systems take regarding that.
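The render-and-diff cycle described above can be illustrated with a toy sketch (hypothetical and heavily simplified; a real library diffs full trees and applies the patches to actual DOM nodes):

```javascript
// Toy sketch of the vdom cycle: render() produces a fresh vnode tree from
// the model, and a diff against the previous tree yields a minimal list of
// patches. A real library would apply these patches to actual DOM nodes;
// here we only diff text children at matching positions, for illustration.
function h(tag, attrs, children) {
  return { tag: tag, attrs: attrs || {}, children: children || [] };
}

function diff(oldNode, newNode, path, patches) {
  path = path || "root";
  patches = patches || [];
  if (typeof oldNode === "string" || typeof newNode === "string") {
    if (oldNode !== newNode) patches.push({ path: path, text: newNode });
    return patches;
  }
  newNode.children.forEach(function (child, i) {
    diff(oldNode.children[i], child, path + "/" + i, patches);
  });
  return patches;
}

var model = { time: "10:00" };
function render() {
  return h("div", {}, ["Time: ", model.time]);
}

var oldTree = render();
model.time = "10:01";
var newTree = render();
// One patch results: { path: "root/1", text: "10:01" }
console.log(diff(oldTree, newTree));
```

Only the changed text node produces a patch; the unchanged "Time: " child generates no DOM work at all, which is the whole economy of the approach.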

Now, it may be tempting to say, Dojo2 could construct vdom structures for Dijit2 widgets somehow better than via an imperative HyperScript API. Maybe it will someday. Dylan is right to question that imperative style and ask if there could be something better. But you generally also have to walk before you can run. Right now, Dojo2 does not have any released vdom Dijit2 widgets. A HyperScript API still provides a solid and proven-successful place to start in making Dojo2 widgets, even if down the road better approaches might be possible or even required for special cases including optimization for vdom construction or diffing.

But having better vdoms or using vdoms in better ways is orthogonal to having good vdom-based applications right now. The HyperScript API (coupled with some Mithril-like support code) provides a way to sidestep all that vdom experimentation, to start building applications now which can use what is out there. Such code can likely benefit from further vdom improvements later with hopefully relatively minor changes to most code if it works at the HyperScript API level.

Reuse is obviously desirable as Dylan mentioned in his comment in the other issue. However, reuse is also difficult given you need multiple examples to figure out how to design reusable stuff. You almost always also need to make assumptions that limit reuse in some direction. You also typically start wrestling with tradeoffs of adding complexity to be general (for related humor, see "Why I Hate Frameworks") versus writing simpler code to be faster and more understandable. You need to make such a tradeoff unless you get lucky with some new inventive idea to avoid a tradeoff, which is rare -- but I feel Leo Horie is onto that sort of invention with Mithril. Compared to what I've seen and heard about many other frameworks, Mithril just feels like an elegant and effective way to make webapps (although not without some warts like related to component initialization complexities).

That process of making such design decisions is of course a deep and perhaps endless discussion (perhaps as a design equivalent of Gödel's incompleteness theorems). There may well be other great ideas out there for vdom-based webapps which are much better than a HyperScript API used in a Mithril-ish way, and which I do not know of (and I welcome hearing about them). But what I do know is that by adopting the proven base of a HyperScript-like API for composing UIs in JavaScript/TypeScript using a vdom approach similar to Mithril, we empower Dojo2 developers to start having those sorts of deep discussions about reuse in the context of working Dijit2-based code, which may prompt further insights into better abstractions from practical experience.

dylans commented 8 years ago

I guess I would argue that HyperScript/JSX/put-selector/other similar approaches are actually a form of templating, just in JavaScript rather than markup. Some frameworks (Aurelia, Angular, even Mayhem) take the approach of moving more logical constructs into markup, and then having a mechanism to efficiently parse a DOM structure into an internal structure that is roughly the same as HyperScript.

It can be argued pretty convincingly that something like HyperScript is more powerful because you essentially have the entire JS language available and you're injecting the capabilities of markup through a virtual DOM syntax, and you are not limited by whatever JS syntax you expose via markup in a templating system. You can also argue that it will always be slightly faster because it's already in JS, though with Mayhem, the build step would always compile the markup template into a JS object (in essence a pre-compilation from a template to something like HyperScript) making the point moot.
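That pre-compilation point can be sketched as follows (a deliberately toy compiler, not Mayhem's actual build step): a markup template is compiled once into the same kind of plain vnode data a HyperScript call would produce, so no parsing cost remains at runtime.

```javascript
// Toy illustration of template pre-compilation: a (very) limited markup
// template is compiled once into a plain vnode object -- in essence the
// same data a HyperScript call would produce. Real template compilers
// handle nesting, attributes, and data bindings.
function h(tag, attrs, children) {
  return { tag: tag, attrs: attrs || {}, children: children || [] };
}

function compileTemplate(template) {
  // Matches a single "<tag>text</tag>" element only
  var match = /^<(\w+)>([^<]*)<\/\1>$/.exec(template.trim());
  if (!match) throw new Error("unsupported template: " + template);
  return h(match[1], {}, [match[2]]);
}

var compiled = compileTemplate("<h1>Hello</h1>");
console.log(compiled.tag); // prints h1
```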

It could also be argued, perhaps convincingly or perhaps not, that limiting the freedom within templating allows you to restrict to scenarios that will be faster to optimize and less error prone.

In many ways, it is a similar debate to declarative vs. programmatic instantiation of widgets. I see templating and HyperScript solving the same purpose, just a different way to expose an API. So I think it should be possible to at least expose a subset of whatever HyperScript style API is available via a template with a parser and build optimization step if we feel that is something that would benefit our users.

In the interest of encouraging reuse and making UI components be composable, I do want to discourage practices that mix DOM operations and business logic from occurring in the same block of code. It could be as simple as only allowing real DOM operations in a render method or a template, and only exposing a general way to access data and properties, but not a way to enforce or implement logic within the render code. Meaning, I would not want to have to fully subclass a widget (like you have to in Dijit 1) just to change around the nodes in the DOM, I would want to be able to just provide a custom render method in a manner similar to what we can do with dgrid.
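A minimal sketch of that idea (hypothetical widget shape, not an actual Dijit or dgrid API): DOM construction lives only in a render method, which can be replaced per instance without subclassing, while the widget's data and logic stay untouched.

```javascript
// Hypothetical sketch: a widget whose DOM structure is isolated in a
// render() method that can be swapped per instance, dgrid-style, without
// subclassing. h() is a stand-in returning plain vnode objects.
function h(tag, attrs, children) {
  return { tag: tag, attrs: attrs || {}, children: children || [] };
}

function Button(label) {
  this.label = label;
}

// Default rendering -- the only place DOM structure is defined
Button.prototype.render = function () {
  return h("button", {}, [this.label]);
};

var fancy = new Button("Save");
// Provide a custom render method on one instance only
fancy.render = function () {
  return h("button", { "class": "fancy" }, [h("b", {}, [this.label])]);
};

console.log(fancy.render().attrs["class"]); // prints fancy
```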

In response to "Any webapp programmer is going to have to get comfortable with the DOM sooner or later, so why not sooner?", I would argue that the DOM is a pretty terrible API to work with overall, and not something I would want most people to deal with every day. Virtual DOM libraries exist in part because DOM operations are slow and difficult to streamline, so you instead work against an intermediary API that handles things like batch updates and 1-way data binding. For example, something like fastdom is simply focused on preventing read/write thrashing, which causes too many re-rendering calls. Batching DOM operations is also part of dojo-dom/schedule.

Virtual DOM libraries probably don't go far enough away from real DOM semantics today, but I think this is also back to the point of why there are so many different approaches and attempts at finding the right general purpose rendering API. A couple of obvious use cases popularized by React include React Native and gl-react, where you want to use JavaScript to create non-DOM UIs, ideally by being able to reuse much of the same logic code for your UI. Native UIs were a similar thing we had thought through with Mayhem, but we had other complexities involved that got in the way of making that an ideal development experience.

pdfernhout commented 8 years ago

@dylans I don't feel it is meaningful to lump HyperScript in with templates. I've only presented a simple example which admittedly looks a lot like a template, but in practice, you can create a GUI via HyperScript by weaving through many functions and classes and so on. So, a simple example may look similar to a template, but that's where the similarity ends, as you outline with having access to the power of JavaScript.

Also, being able to refactor HyperScript "templates" in TypeScript using an IDE is a big win.

pdfernhout commented 8 years ago

@dylans Sure, the DOM sucks. Searching on that phrase turns up this recent presentation by a core React committer, for example: "The DOM Sucks, and It Should Become a Second-Class Citizen". The summary there:

React has always been about the Virtual DOM. A nice way to render HTML (and some of SVG and maybe some Web Components). Although there's also react-art, react-three, react-canvas, react-curses... Oh, and react-native! Even if you bottom out at HTML, most of what React does really well is rendering to OTHER React components. Meanwhile most projects still try to retrofit our needs into HTML and CSS primitives. I'll talk about why the DOM is flawed and how it is becoming a second-class citizen in the land of React apps.

Dojo2 can no doubt do the same with Dijit2 widgets as far as helping programmers get far away from the DOM. And good Dojo2 widgets will help separate business logic from presentation logic. That is what I expect would be the development path.

== More details

It's sometimes said "you should not try to legislate morality". Of course, in practice, a lot of laws are about morality, so it's a complex topic. But regarding separating business logic and presentation logic, I don't see how going to extra effort to try to legislate that in some specific way with templates is going to be worth it (even if templates may have other value). Such an effort will likely just introduce extra complexity and end up failing anyway when people think of clever ways to work around it. It also still won't solve all the other business logic vs. presentation logic choices elsewhere in the application.

If some programmers are creating messes, there is only a limited extent to which tools or libraries can help without otherwise being so restrictive they make development so much harder they are abandoned. For example, a "clueless" programmer intent on making a mess will just import some other library they prefer for making messes. "Code review won't let them get away with that!", you might object. But if you're assuming code review, then you've already assumed reasonably competent programmers (or, at least, a reasonably competent organization surrounding the programmers).

It is a good idea however to make doing the right thing easy. :-) It is also a good thing to make code that skillful programmers will admire and recommend. I feel libraries written with that end in mind are a better investment of effort (and more fun) in a flexible language like JavaScript than trying to make doing the wrong thing impossible. One of Murphy's technology laws is: "Build a system that even a fool can use and only a fool will want to use it."

Whatever we can say about the DOM, it's a fact of life as a webapp developer, and millions of people program against it every day. However, that said, after a Dojo2 system exists that lets knowledgeable developers make great stuff quickly, then it is not unreasonable to ask, how can we put training wheels on part of it? That is indeed kind of what I did with the specification form I developed as described above. But even then, there is the risk of helping developers become "Expert Beginners" like @kitsonk warned about in his presentation I watched yesterday on "Defend against the Caveman Coder".

Maybe I should not admit that some of that feeling about having to learn the DOM anyway was from wrestling with Dijit? :-) I thought Dijit would save me time at first relative to learning the DOM and CSS well first (other than a casual acquaintance), but I found myself having to learn the DOM and CSS in parallel with learning Dijit anyway because, like most abstractions, Dijit "leaked". If I had come to Dijit with the expectation that it was going to help me better use DOM and CSS I already knew (and perhaps disliked), that would have been a different experience. But that is not the way Dijit is presented. To be fair to Dijit though, I was trying to use Dijit in a dynamic way via a builder pattern, so I was not using Dijit in the way intended, which led to various unusual issues not covered by the standard documentation. And because most of the Dijit examples in the documentation are about how to use Dijit via HTML templates and not via programmatic construction as I was doing, that was another stumbling block for my specific use case.

Mithril provides that sort of improved experience IMHO, since it is unabashedly starting at the DOM and CSS level. Except, of course, as a vdom, with Mithril you are never adding, removing, or iterating over DOM nodes directly, so that part of the DOM complexity is hidden and irrelevant as a library user. So, by saying Mithril starts at the DOM, I'm talking more about using HTML tags, attributes, and callback events, and using CSS for styling -- not any sort of jQuery-like DOM manipulation. All that basic DOM-related work is well documented and learnable from many good sources and immediately applicable to a Mithril-based webapp. To be fair though, I only started using Mithril after learning a lot more about the DOM from working with Dijit as I tried to debug Dijits or work around their perceived limitations sometimes. If I had tried to use Mithril without understanding the DOM and CSS, it would have been much harder. Likewise, had I already known the DOM and CSS extremely well before using Dijit, my experience there might have been much easier.

Even if the DOM sucks, millions of programmers know how to work with it (at least at a basic level). If Dojo2 is pitched at data-rich webapps (like internal webapps in big organizations), it seems fair to assume developers will already know something about the DOM and CSS and know something about separating business logic from presentation logic (or should). Ideally, such developers are going to be already motivated to make and use Dojo2 widgets to isolate themselves from the DOM anyway once they reach a certain level of understanding.

I did not want this issue to be about templates because using HyperScript does not preclude supporting templates, but perhaps you could present a few other specific use cases (in terms of assumptions about developer experiences and specific tasks) where business-logic-proof presentation templates are a big win as opposed to just using JavaScript with HyperScript to code the GUI part? Is an example going to be ten novice programmers implementing a complex GUI for a big company with no mentoring? How realistic is that? Would such a group of novices likely even consider using Dojo2 in the first place as opposed to just using some more popular but less capable library -- probably one written in PHP? :-)

Before encountering the Dojo Toolkit, my understanding of "dojo" was from Aikido, as a training hall to help people seeking a certain sort of enlightenment to achieve it in a certain sort of way through training as a collaborative effort with others. It seems to me that Dojo1 has fulfilled that role for many JavaScript developers (including me, as I learned a lot from it), and Dojo2 will ideally fulfill that role as well (but with a vdom focus for widgets). So, tools in the Dojo2 toolkit would ideally be seen as steps along the way to such a programming enlightenment, as would any framework aspects as opinionated advice about data-rich webapps. And if GUI templates can serve as a step along that way somehow, then that is great -- but they should be framed in that context IMHO. But we probably should still assume the programmer is on the path to enlightenment of a certain sort, and that is why they or their team picked Dojo. :-)

pdfernhout commented 8 years ago

tl;dr This comment provides an example of focusing on a higher-level issue of supporting domain experts working with programmers, as opposed to keeping novice programmers from writing bad code.

@dylans As a clarification and example, given I have been arguing against GUI templates, here are specification files my wife (a domain expert on narrative) mostly wrote that drive much of the behavior of that forty page webapp. I created the format for them (they were first structured text), added some JavaScript to them in a few cases for conditional logic, and built the interpreter to use them to generate pages (including with complex widgets), and then migrated that all through about three or four different formats and other major refactorings. You need to drill down into the directories to see the actual *.ts TypeScript files. Those specifications define much of the "business logic" and also "data storage" and "reporting" layer of the webapp I've referred to before. They were intended to be created and maintained by someone without much JavaScript knowledge (with the assistance of someone who does). There is very little HTML in those specifications, although there is application-specific text that is displayed to the user. So, despite arguing against GUI-related templates, I remain tremendously sympathetic to your point about separating business logic from presentation logic (as viewed broadly, including from a Domain-Specific Language perspective). I just don't feel restricting those who want to do it wrong is going to work as well as empowering those who want to do it right -- given the nature of JavaScript, the diversity of the web, and the diversity of possible approaches to different tasks.

For that webapp, I was trying something new (and perhaps overly ambitious) to try to unify all the application specifications (inspired in part by dmodel properties and schemas). The unifying idea seemed feasible at the start given the initially intended structure of the application. However, "complexities" quickly got in the way, and I started regretting a bit not just doing it the usual way of GUI code here, model code there, storage code over there, and reporting code way over there. :-)

I was also trying to avoid a correspondence error risk, given JavaScript does not have the equivalent of Java's enum enforced at the compiler level. So, it is easy to accidentally end up with code in those four areas (gui, model, storage, reporting) which does not use consistent IDs. TypeScript can help with that correspondence error risk by its type checking, but I did not start out in TypeScript.

I thought it might be a big win to have all the specification-related code in one place, and there was great value in that. But what I also discovered is that ideally I would want to look at the application specifications in multiple ways, and these specifications were organized conceptually in just one way (by page, a key idea in the webapp, which was organized as a sort of notebook). Ideally, rather than JSON-ish specifications, I wanted those requirements to, say, be in a database of some sort I could query as a developer and sort and report on in different ways. I could in theory have written a tool to do that which imported the specifications -- and I actually did that a few times, mostly for transformative ends as I upgraded the surrounding infrastructure -- but not specifically for reporting in a variety of ways. Also, as a legacy of trying to reduce total file count before eventually doing a packaging step, what I really wanted was one fine-grained file per specification aspect (like a displayed question and/or stored data item), but I ended up with one file per page with multiple items in them.

The specifications started out as structured text (all in one big file), then JavaScript, then JSON, and then TypeScript. There were various enhancements along the way, and even a brief time of storing some related popup help text in an HTML file which was dynamically loaded and parsed. There remain some legacy issues from that migration; had I started out in TypeScript, the specification format might have been slightly different and better (by using various TypeScript features).

What I did there was push as much of the "business logic" and "database structure" and such into data structures that may seem to look a bit like GUI templates. But in this case, these were not GUI templates but are essentially human-readable and machine-readable requirements documents (called "specifications") which usually happen to have some GUI-related text in them among other things.

This architecture reflected separations of human roles even more than separation of concerns. The approach assumes two roles: a domain expert who has a very limited area of the codebase she can edit, using an essentially JSON-ish Domain-Specific Language (DSL) to define those requirements of what information needs to be collected or presented, and a software developer who maintains the machinery that translates that DSL into a usable webapp (including with complex application-specific widgets, and some minor tweaks to the DSL). Of course, if a domain expert does not have any programming skills and so can't edit the codebase effectively, then the programmer could take less formal specifications and translate them to those DSL specifications. That is what my wife and I did when making a similar system once for an insurance-related application to do telephone interviews, working from specifications created by insurance underwriters and translating them into essentially a spreadsheet that drove much of the application's behavior.
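A minimal sketch of that role separation (names, spec shape, and embedded logic all invented for illustration): the domain expert edits only the JSON-ish specification, and the programmer maintains the interpreter that turns it into application behavior.

```javascript
// Hypothetical sketch of DSL-driven behavior: the domain expert maintains
// the specification (including a little embedded conditional logic), and
// the programmer maintains the interpreter below it.
var specification = [
  { id: "participantName", prompt: "Your name", type: "text" },
  {
    id: "followUp",
    prompt: "What happened next?",
    type: "text",
    // embedded conditional logic, written with the programmer's help
    showIf: function (answers) { return !!answers.participantName; }
  }
];

// The interpreter decides which questions are currently visible
function visibleQuestions(spec, answers) {
  return spec.filter(function (item) {
    return !item.showIf || item.showIf(answers);
  }).map(function (item) { return item.id; });
}

console.log(visibleQuestions(specification, {})); // prints [ 'participantName' ]
```

The same interpreter could drive data storage and reporting from the same spec items, which is the unification the webapp attempted.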

There are no GUI templates in that webapp in the sense normally meant -- even if the specifications do include text that shows up in the GUI and one might otherwise see the requirements as "templates" of a sort. The driving issue in this case was, more than anything, how can a domain expert work productively with a programmer on a complex task (encoding a participatory narrative inquiry process defined in a 700 page textbook into a webapp).

Given that emphasis on the separation of human roles and responsibilities via a DSL and an interpreter, a framework would just get in the way if it emphasized keeping anyone from putting "business logic" in the GUI by forcing them to use some HTML-ish templating syntax instead of supporting creating widgets configured via DSL specification. So, that is a specific example of where trying to force a programmer to do GUIs a certain way to protect them from themselves would likely be counterproductive. Obviously, there is no problem with a toolkit that has a tool for using templates, if it could just be ignored in this case.

It's still not clear to me that I really made the application more maintainable than if those specifications were just coded directly (by the programmer, not the domain expert) in an imperative style into webapp pages and other data storage and reporting operations. That is because the current approach increases the general level of skill required to maintain the webapp outside of the requirements DSL part. A programmer now has to understand the machinery that translates those DSL specifications into the webapp's presentation and data storage and reporting behavior. That means a longer learning curve to get into understanding the application, and also requires a programmer comfortable with abstraction (and a lot of people who call themselves programmers are not comfortable with abstraction and never will be, as it takes a more mathematical mindset, even if such programmers may be able to contribute greatly in other areas). Abstraction (or indirection) can be useful, but it comes with a cost. As is said, there is no problem in computer science that another layer of indirection can't solve, except for the problem of too many layers of indirection. :-)

The presentation layer was first using Dijit-based webapp pages (far from the DOM) and then later used Mithril-based webapp pages (close to the DOM). Being specification-driven made that switch much easier than had I coded the GUI directly in an imperative style, true. And it would make another switch, like to Dojo2/Dijit2 easier as well. :-)

But all that machinery to interpret the requirements DSL is still harder to understand than plain imperative code because of the extra layers of abstraction. That machinery also went through several refactorings, which all took time, including time spent debugging more complex operations. Debugging was definitely harder than in a more straightforward model/GUI/storage/reporting separation without DSL-like specifications and an interpreter/builder pattern. As Brian Kernighan said, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

That approach still worked OK in the end for the situation where my wife as the domain expert did not know JavaScript (but is otherwise a reasonably good programmer) and I did know JavaScript but not much about the problem domain (and was comfortable with a lot of abstraction) -- and where I was also experimenting with storing data in message-based triples (an interest of mine). It's not clear to me a non-programmer could realistically maintain even just the more limited of those specification "templates" though. I've known of non-programmers making the most basic errors in editing files like accidentally deleting big sections and not noticing. And so I'm not sure if I'd recommend that approach to others as-is.

What I do know is that the machinery to interpret the specifications got more and more complex as the application requirements both grew and became better understood over time. Had I just written the GUI parts directly using the HyperScript API, without a builder pattern driven by specifications, the GUI code would be much easier to understand and tweak (although much harder to switch to other widget systems).

What I also know is that Mithril's version of HyperScript did not get in the way of what I wanted to do. Maybe one can question whether the specifications-driven approach was a good idea in this case, but HyperScript made the GUI component part of it easily approachable, in just the same way it would have supported a more conventional approach. However, what I would have liked, and did not have, is a set of ARIA-compliant widgets in Mithril, ideally also at a higher level than DOM widgets. I had to build my own higher-level widgets to do things like labelled textareas and checkboxes. Those are the sort of higher-level components a toolkit would ideally provide -- and even Dijit does not provide them out of the box.
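A labelled-checkbox wrapper of the kind described might look like the following sketch. To keep it self-contained, a tiny stand-in for Mithril's `m()` is defined inline (the real `m()` does more, such as parsing CSS-selector syntax like `"input[type=checkbox]"`); `labelledCheckbox` and its parameters are hypothetical names for illustration:

```javascript
// Simplified stand-in for Mithril's m() so this sketch runs on its own;
// it just builds plain vnode objects.
function m(tag, attrs, children) {
    return { tag: tag, attrs: attrs || {}, children: children || [] };
}

// Hypothetical higher-level widget: a checkbox plus an associated <label>,
// tied together by id so assistive technology announces the label text.
function labelledCheckbox(id, labelText, checked, onchange) {
    return m("div", { "class": "labelled-checkbox" }, [
        m("input", { type: "checkbox", id: id, checked: checked, onchange: onchange }),
        m("label", { "for": id }, [labelText])
    ]);
}

var checkboxVnode = labelledCheckbox("agreeToTerms", "I agree to the terms", false, function () {});
```

The point is that the higher-level widget is just a function composing lower-level hyperscript calls, which is exactly the layering being argued for here.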

Often systems emerge out of unique constellations of people, pressures, and concerns. Of course, it might be nice to have more general tools to do what that webapp does when it is appropriate. But I don't think that is a "one size fits all" solution, even if it can be useful sometimes. So, I remain skeptical of frameworks (even ad hoc ones I have written myself), even as I remain optimistic about toolkits.

Perhaps my overall point here is that the focus of Dojo2 should be more on the higher sorts of issues, like how does a toolkit or framework support a domain expert working with a programmer -- rather than how does a framework keep a novice programmer from writing bad code. Sure, someone can mess things up using the HyperScript API, like by mixing business logic into GUI code not constrained by business rules. But one can mess things up with plain JavaScript too.

Ideally, Dojo2 should focus more on empowering programmers and making the right thing easy (with some other concerns being "training issues" or "code review issues"). I feel full access to the HyperScript API at the widget building level should be part of that empowerment -- even if a good programmer is likely to be spending most of his or her time working at a higher level than that.

pdfernhout commented 8 years ago

@dylans One other point on business logic. Having worked in the insurance industry, where the wording on questions to be asked of insurance applicants had to be formally approved by each state, and legally has to be asked just that way (typos and all), I am biased towards feeling some aspects of "presentation" are more "business" logic than some other software developers might be. :-)

pdfernhout commented 8 years ago

@dylans To build on the previous point about working in a regulated industry where a certain presentation is a legally-mandated business requirement, that example shows how the deeper issue is not presentation logic vs. business logic, but instead a more general "separation of concerns" as part of a SOLID design process.

In the case of HyperScript, I feel it is a good choice because it contributes to the separation of concerns. The DOM needs to be dealt with. The HyperScript API (and a vdom) does that well (along with some other Mithril ideas). Other layers can then interact with that HyperScript layer as they see fit. Sure, one can push business logic into DOM callbacks via the HyperScript API, but that is presumably just poor programming practice that should ideally be flagged at code review time.
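As a minimal sketch of the separation being argued for (the model and all names here are invented for illustration): the DOM callback only forwards the event to a plain model method, so the business rule itself can be unit-tested without any DOM at all.

```javascript
// The business rule lives in a plain object, testable without any DOM.
var counterModel = {
    count: 0,
    increment: function () {
        // Invented business rule, purely for illustration: the count may never exceed 10.
        if (this.count < 10) this.count += 1;
    }
};

// The view-side callback (what would be wired to onclick via the
// HyperScript API) only delegates; no business logic lives here.
function onCountClicked() {
    counterModel.increment();
}

// Simulate 25 clicks; the model enforces its own ceiling.
for (var i = 0; i < 25; i++) onCountClicked();
```

Nothing in the HyperScript API forces this split, which is exactly the point: it is a norm enforced by review, not by architecture.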

If an organization expects programmers to use a certain template-based approach for some reason, with templates translated to either HyperScript or vdom structures, then that is still possible alongside a HyperScript API. However, I can wonder if the same sort of hypothetical organization that can't get its programmers to follow a best practice of designing for separation of concerns (or roles) is really going to be able to reliably get programmers to follow a guideline of using some template system (or even decide to issue such a recommendation). So, I feel the use case for a certain type of restrictive template is narrow, and thus I feel supporting that use case should not be a priority at first relative to empowering programmers who want to write well-designed performant webapps.

Of course, I may be wrong. :-) But that is how I feel about it at the moment. Admittedly, it requires less thinking by programmers to always use some templating system than to remember and act on a design principle like separation of concerns or even follow a general rule like "Don't put business logic in the GUI callbacks!". Some specific examples might help convince me otherwise.

As Lawrence Lessig talks about in "Code 2.0", there are at least four ways to shape human behavior -- rules, norms, incentives, and architecture. So, it is a fair question to ask, what programming behaviors are we trying to shape with Dojo2, and which of these four possibilities is the best way to influence each behavior? What I'm trying to argue here is essentially that separation of concerns at the DOM-interfacing level should be more due to a "norm" or a "rule" than due to "architecture", because the cost of an architecture at that level is too high in restricting unexpected designs and in reducing performance. (As an example, Mithril is much faster than React in some scenarios, in part because it has less overhead for basic vdom operations by needing fewer component layers for basic things.) And, by contrast, I'd suggest solutions involving "architecture" might serve well at a higher level, like to answer, in an opinionated way as with the example above, how could a domain expert work well with a programmer when making a webapp involving a lot of data entry? Or, at an intermediate level, architecture could also help with a question like, how does a programmer quickly make an a11y interface with labelled data input fields?

dylans commented 8 years ago

I think we agree on most of this, but I'll be brief in the interest of time.

A framework is successful in its approach to architecture if it makes the right way be straightforward/simple (not easy) and direct, so that it's not circumvented, and flexible enough that people can workaround the preferred approach when necessary. Achieving that combo is the challenge. Simple + simple + simple === simple, whereas easy + easy + easy === complex.

You want to find the right balance of clarity, good architecture, and separation of concerns -- one that is efficiently productive, performs well, and does not lead to slow, difficult-to-maintain applications by default. We cannot prevent bad engineers from all the mistakes they will make, but we also don't need to start them off with a tightrope walk over a pit of fire either.

Dojo 1.x succeeds in some of these areas, but over time some of its benefits relative to its peers have eroded. Dojo 2.x aims to do better.

pdfernhout commented 8 years ago

@dylans I like the point on simplicity, because simpler things tend to be more reliable and maintainable. I agree that sometimes trying to make things "easy" can lead to complexities. As an example, in the data collection DSL example above, the supporting machinery for that DSL got complex. Trying to do some specific GUI things outside what the DSL offered as "easy" also was more complex, and thus harder than if the system had been all imperative code rather than driven by interpreting specifications.

I'm willing to entertain the notion that using a HyperScript API to work near the DOM level could be seen as the programming equivalent of "tightrope walking over a pit of fire". :-) Even if I have found working at that level fun and productive with Mithril -- but then some people probably like being high-wire walkers in circuses. :-) So, let's assume that a HyperScript API was risky to use (like the equivalent of programming in Assembly Language instead of Java), and yet we wanted to support programmers doing that anyway when needed (like to make specific complex components that perform well). Then one way to make progress on Dojo2 is to build a suite of Dojo2 widgets so straightforward/simple to use that such tightrope walking would be kept to a minimum.

For example, in the webapp mentioned previously, almost all the HyperScript API use is within the context of building components anyway. If I had available a good set of a11y and i18n components like a labelled set of radio buttons and a labelled text area, I might not have had to use the HyperScript API much at all (beyond perhaps composing such widgets into linear sequences, which could perhaps also be done via a helper function). Although, I can almost guarantee there would still be some futzing with the CSS for colors, font sizes, and margins as an "abstraction leak". :-)
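The helper function alluded to, composing widgets into a linear sequence, might be sketched like this. `buildFieldRow`, `buildForm`, and the plain-object vnodes are invented stand-ins, not APIs from Mithril or the real webapp:

```javascript
// Invented stand-in builder: turns one field specification into a
// plain-object vnode. A real version would call m() and choose a widget
// type based on something like fieldSpec.displayType.
function buildFieldRow(fieldSpec) {
    return {
        tag: "div",
        attrs: { "class": "questionForm-row" },
        children: [fieldSpec.displayPrompt]
    };
}

// Hypothetical composition helper: a linear sequence of field
// specifications in, a linear sequence of vnodes out.
function buildForm(fieldSpecs) {
    return fieldSpecs.map(buildFieldRow);
}

var formRows = buildForm([
    { id: "name", displayPrompt: "Name" },
    { id: "email", displayPrompt: "Email" }
]);
```

With a helper like this plus a good stock of labelled a11y widgets, direct hyperscript calls would mostly disappear from application code.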

Ideally, developers could then compose many new Dojo2 widgets by building on existing Dojo2 widgets (rather than have to get out the tightrope). The context that surrounds those widgets would, ideally, also make separating concerns straightforward/simple (but that is a separate issue -- here is perhaps one piece of that puzzle as a value path resolver I wrote to work with specification-defined input fields).

So, I feel the way to keep people away from the pit of DOM (given a HyperScript API layer) is to make really great composable Dojo2 widgets -- with the lowest level widgets built on top of a HyperScript-ish API like Mithril offers. :-) The Dijit1 widget set itself has been a big success and a reason many people chose Dojo1, especially with the a11y focus. Ideally, equivalents for all the existing Dijits would be created for Dojo2, with an eye to making an upgrade from Dojo1 to Dojo2 straightforward/simple for an existing webapp (even if perhaps not "easy" in terms of requiring some not-insignificant amount of programming work to change paradigms from dependencies to a vdom-based Flux-ish approach). I know such a migration is doable though, because I've already done it in a more limited way with a move from Dijit1 to Mithril, where I tried to maintain some compatibility with the DIjit1 approach at first.

While I'm certainly open to suggestions, I know from first-hand experience that just picking Mithril as a vdom and building Dijit-compatible widgets on top of it with the "m" API is going to work OK and provide a platform that could build any sort of webapp. The only proviso, as noted in issue #11, is that dgrid is going to be the biggest stress test for any solution, and I have not created a complex table in Mithril much beyond a custom grid with typically fewer than 100 rows. So, dgrid is going to be a sticking point, although, as noted in that issue, dgrid could always be used as-is and wrapped in Mithril.
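Wrapping dgrid as-is could lean on Mithril 0.x's `config` hook, which hands the view the real DOM element after the vdom creates it (signature `config(element, isInitialized, context)`). The sketch below simulates that with a hypothetical `LegacyGrid` stand-in rather than real dgrid, since the point is only the shape of the wrapper, and it calls the hook by hand the way Mithril would on a first draw:

```javascript
// Hypothetical stand-in for a non-vdom widget such as dgrid.
function LegacyGrid(domNode) {
    this.domNode = domNode;
    this.rendered = false;
}
LegacyGrid.prototype.render = function () { this.rendered = true; };

// A Mithril 0.x view would attach the legacy widget via the config hook,
// which receives the real DOM element after the vdom creates it.
function legacyGridWrapper(state) {
    return {
        tag: "div",
        attrs: {
            config: function (element, isInitialized) {
                // Instantiate the legacy widget only on the first draw;
                // later redraws leave the node alone.
                if (!isInitialized) {
                    state.grid = new LegacyGrid(element);
                    state.grid.render();
                }
            }
        }
    };
}

// Simulate what Mithril itself would do on the first draw:
var gridState = {};
var gridVnode = legacyGridWrapper(gridState);
gridVnode.attrs.config({ nodeName: "DIV" }, false);
```

The vdom then treats the wrapper's node as an opaque island, which is the usual escape hatch for integrating widgets that manage their own DOM.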

For example, here is some code I wrote for the essence of a checkbox in Mithril:

    parts = [
        m("input[type=checkbox]", {id: getIdForText(fieldID), checked: value, onchange: function(event) { change(null, event.target.checked); }}),
        m("br")
    ];
    ...
    return m("div", {key: fieldID, "class": classString}, parts);

Here is some code for the essence of a select:

    var selectOptions = [];
    var defaultOptions = {
        name: '',
        value: '',
        selected: undefined
    };
    if (!value) defaultOptions.selected = 'selected';
    selectOptions.push(m("option", defaultOptions, '-- select --'));
    selectOptions = selectOptions.concat(
        fieldSpecification.valueOptions.map(function (option, index) {
            var optionName;
            var optionValue;
            if (typeof option === "string") {
                optionName = option;
                optionValue = option;
            } else {
                optionName = option.name;
                optionValue = option.value;                    
            }
            var optionOptions = {value: optionValue, selected: undefined};
            // console.log("optionValue, value", optionValue, value, optionValue === value);
            if (optionValue === value) optionOptions.selected = 'selected';
            return m("option", optionOptions, optionName);
        })
    );

    parts = [
        m("select", standardValueOptions, selectOptions),
        m("br")
    ];
    ...
    return m("div", {key: fieldID, "class": classString}, parts);

So, I feel a basic set of Dojo2 widgets on top of the Mithril HyperScript-ish API is doable in a relatively small amount of time. Now, would the results be the final Dojo2 widget set? No. That would just be a starting point for playing around with and commenting on and improving and refactoring further. Changes would be needed to improve compatibility with Dijit1 as far as making a migration more straightforward/simple, adding support for Dojo stores (like with the select options), and so on. One might even have more than one set of widgets -- the Dijit-upgrade set and maybe some new sets optimized for some other use cases. Such a Dojo2 pre-alpha1 widget set would just be a first step on a path. That path might even eventually include ditching Mithril for some new vdom, perhaps one that may not even exist yet.

I feel having a specific implementation of Dijit2 widgets to critique, even a bad one, is going to lead to more insights at this point than trying to design this purely abstractly. Some things are just more obvious when looking at running code. Still, that also requires a willingness to sometimes throw code away and "burn the disk packs". The polite term now for doing that is "refactoring". :-)

BTW, maybe I've just gotten too used to the smell of brimstone :-) but I still don't see the above code examples as being that terribly dangerous-looking as far as the DOM part. Of course, one could easily make valid critiques of the JavaScript code itself from a toolkit point of view, like the select list construction code with an untranslated default select option, assumptions about no actual String objects in the options, etc. But I look at the DOM part constructed with the HyperScript API and it just makes sense and looks easily maintainable (to me at least). That HyperScript API part seems to be the least of the problems in that code.

pdfernhout commented 8 years ago

@dylans has pointed out that there is too much text here for most people to read quickly, so I'm closing this issue and plan to create a new one with a short summary of the key issues here that is more inviting to discussion.