karimayachi / karimayachi.github.io

MIT License

Binding Engine #6

Open karimayachi opened 4 years ago

karimayachi commented 4 years ago

Conditional binding

As discussed in issue #3 we would probably have simple non-nested boolean && and || and simple non-nested ternary operations.

No eval-ed JavaScript as it would break CSP requirements and be generally unsafe.
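A rough sketch of what that could look like (hypothetical names, not an actual implementation): the binding string is pattern-matched against the two allowed shapes and resolved against the viewmodel, so no code is ever eval-ed:

```javascript
// Rough sketch (hypothetical names): support only the two shapes discussed --
// a non-nested `a && b` / `a || b` and a non-nested `cond ? x : y` -- by
// pattern-matching the binding string and looking properties up on the
// viewmodel. No eval, so CSP stays intact.
function evalBinding(expr, vm) {
  let m = expr.match(/^(\w+)\s*\?\s*(\w+)\s*:\s*(\w+)$/);
  if (m) return vm[m[1]] ? vm[m[2]] : vm[m[3]];
  m = expr.match(/^(\w+)\s*(&&|\|\|)\s*(\w+)$/);
  if (m) return m[2] === '&&' ? (vm[m[1]] && vm[m[3]]) : (vm[m[1]] || vm[m[3]]);
  return vm[expr]; // plain property reference
}
```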

I was contemplating JSX. It shouldn't be a requirement (as we're not a Framework and don't impose these techniques 😉), but optionally it can provide more active code and conditionals in markup without breaking CSP. Maaaaybe?

Syntax

I was thinking about the syntax for the view-binding. There are so many styles to choose from (data-* attributes, handlebars, Lit-style, JSX-style, etc).

I think that if our strength is that we adhere as much as possible to open standards (such as with Web Components), that it would be logical to use a very familiar JS-syntax: template literals

<h1>${title}</h1>
<div onclick="${myClickHandler}"></div>

It wouldn't be real template literals that are parsed by JS of course, just the syntax for added familiarity...
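A minimal sketch of that idea (hypothetical helper, not real template literal evaluation): the markup is scanned for ${name} placeholders and each one is looked up on the viewmodel rather than handed to the JS engine:

```javascript
// Sketch (hypothetical helper): these are not real template literals --
// the markup is scanned for ${name} placeholders and each is resolved
// against the viewmodel, so nothing is ever executed as code.
function bindMarkup(markup, vm) {
  return markup.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in vm ? String(vm[name]) : match // leave unknown names untouched
  );
}
```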

avickers commented 4 years ago

I stuck with data-bind rather than naming custom "directives" for every binding because I wanted to maximize interop with 3rd party WC libraries. IMO, using custom directives is akin to polluting the global scope in JS. I stuck with data-bind because I was fairly confident that nothing other than KO would use it. That said, the idea of a directive per binding has certainly crossed my mind, and I want to make sure that it's possible to create a plugin that accomplishes that easily. (Although, I could be persuaded to make it the official implementation and drop data-bind, as long as there's consensus and we're careful not to collide with any well known libraries.)

As for your example of using template literals, that is exactly how it works with Koc. You put your markup inside of the tagged template literal html`` and are expected to use real JavaScript expressions. When it is evaluated, it knows how to react to Observables, etc., and infer bindings. (Although, this can and will be enhanced.) This way it is possible to get JS in markup without violating CSP.
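For anyone following along, the mechanics that make this CSP-safe are plain tagged templates: the tag function receives the literal strings and the interpolated values separately, so it can special-case observable-like values. A generic sketch (not Koc's actual internals):

```javascript
// Generic sketch of the tagged-template mechanism (not Koc's actual code):
// the tag sees the literal strings and interpolated values separately,
// so it can special-case observable-like values instead of flattening
// everything to text. "Observable-like" here means anything with a
// subscribe method -- an assumption for illustration.
function html(strings, ...values) {
  return strings.reduce((out, str, i) => {
    const v = values[i - 1];
    const rendered = v && typeof v.subscribe === 'function' ? v() : v;
    return out + rendered + str;
  });
}
```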

It seems like we have different views here. My view is that the simplest and fastest way to handle these complex UIs is to move the HTML markup inside of single-file-component JS files and use real template literals to evaluate them along with real JS expressions.

You seem to have a preference for avoiding this and instead implementing a subset of JavaScript in raw HTML files via a complex and inevitably slower parser.

I don't really want to implement two complex parsers, so I guess we should settle whether it's preferable to move markup inside of JS or JS inside of HTML.

My preference is for the former. I feel like it's been made common practice by most modern frameworks; I believe it's far more powerful to have all of JS available rather than some subset; and, I believe it will prove much more performant.

avickers commented 4 years ago

I should point out that it's straightforward to configure modern IDEs to provide proper syntax highlighting for html/css inside of tagged template literals, and usually packages already exist for it.

export default class MyComp extends Koc {
  constructor() {
    super()
    const ko = this.ko()
    const vm = {
      title: ko.observable('Hello World'),
      spam: ko.observable(false)
    }

    // this will be highlighted as html
    this.html`<h1>${vm.title}</h1>
    <div>
      <input type="checkbox" name="spam" checked=${vm.spam}>
      <label for="spam">Send Spam?</label>
    </div>`

    ko.applyBindings(vm)
  }
}
Koc.register("my-comp", MyComp)

In the future, decorators would clean this up a little.

If we want to do complex conditional view logic, it's easy for me to implement in this case. For instance:

export default class MyComp extends Koc {
  constructor() {
    super()
    const ko = this.ko()
    const vm = {
      show: ko.observable('banana')
    }

    this.html`
    ~switch=${vm.show}
    ~case="banana"
    <p>Banana</p>
    ~/case
    ~/switch
    `

    ko.applyBindings(vm)
  }
}

or something. I don't know. We don't need to worry about it being valid HTML because it will get intercepted and replaced with valid binding syntax before being injected into the DOM.

html`
~if=${vm.bool}
~then
...
~/then
~else (optional)
...
~/else
~/if
`

Of course the syntax could be anything, or could hew more closely to KO's containerless syntax.

This is just a quick spitball.
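Whatever the final marker syntax, the interception step could be little more than a textual pass that rewrites the markers into KO-style containerless comments before the markup is injected. A rough sketch handling only the simple ~if ... ~/if shape from the spitball above:

```javascript
// Rough sketch: a pre-injection textual pass that replaces the spitballed
// ~if markers with KO-style containerless comments (which ARE valid once
// in the DOM). Only the simple ~if ... ~/if shape is handled here.
function rewriteMarkers(markup) {
  return markup
    .replace(/~if=(\S+)/g, '<!-- ko if: $1 -->')
    .replace(/~\/if/g, '<!-- /ko -->');
}
```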

avickers commented 4 years ago

So I'm thinking that the best way to approach adding an expressive binding engine to the raw HTML pages would probably be to follow the lead of Angular, Svelte, etc. and add a compile/build time step. I think that it would pay dividends to mirror JS syntax because what we could do is parse the HTML, extract the expressions, and automagically create and bind dependentObservables instead.
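A sketch of the extraction idea (all names hypothetical): the build step pulls each expression out of the markup, substitutes a generated binding name, and emits the code that would create the matching dependentObservable at runtime:

```javascript
// Hypothetical build-step sketch: pull expressions out of the markup,
// replace each with a generated observable name, and emit the JS that
// would define a matching dependentObservable at runtime.
function extractExpressions(markup) {
  const emitted = [];
  let counter = 0;
  const rewritten = markup.replace(/\$\{([^}]+)\}/g, (_, expr) => {
    const name = `__binding${counter++}`;
    emitted.push(`vm.${name} = ko.dependentObservable(() => ${expr});`);
    return '${' + name + '}';
  });
  return { rewritten, emitted };
}
```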

The quickest way to get started might be a Parcel/Webpack plugin, but I think ultimately it would need to be a pipeline agnostic CLI tool.

Handling it at build time rather than run time obviates all of the performance pressures and bypasses CSP issues.

karimayachi commented 4 years ago

Hmm, maybe Knockdown already does what I meant. I would just like to be able to bind to existing HTML instead of wrapping it in a component (which may be nice in some cases, but would require an (unnecessary?) abstraction level in other cases). It doesn't have to be a complete parsing engine (with the possible exception of boolean operators).

It seems Knockdown already does that, but I'm having a few problems with Knockdown in a TypeScript environment at the moment and haven't been able to spend much time on it, so I will get back on this!

Edit: compile time parsing may be a nice feature, but I don't yet see how we could do that without imposing a build pipeline

avickers commented 4 years ago

That might be because the *.d.ts is incomplete and quite likely also wrong in places. I've never written one before. I'm working on finishing it as I work through the documentation.


karimayachi commented 4 years ago

I wouldn't bother for now. I'll work around it and make some suggestions in the process (such as using generics for the observables, as KO does). But all this isn't really important for now, if only I can get it to work 😄

avickers commented 4 years ago

Looking at our respective approaches to binding engines, yours seems to be a top-down approach. Mine is more of a bottom-up approach.

Each brings pros and cons.

Google has set a goal of eliminating what they call Cumulative Layout Shift or CLS.

One thing that I have been wondering about with the top-down approach is what the ramifications are for CLS. The content in the HTML file will be rendered first. Then after KO/Imagine are initialized, they will iterate through the DOM and start to activate the bindings sequentially. This has the potential to produce a cascade of layout shifts.

With the bottom-up approach, the DOM can initially be built virtually in JS before being injected into the HTML all at once.
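The batching itself is the standard fragment trick: build the whole subtree off-document, then attach it once, so layout happens in a single pass. A sketch, with a tiny stand-in for the DOM so it also runs outside a browser (in the browser the real document would be used):

```javascript
// Sketch: batch DOM construction off-document so layout happens once.
// In the browser this would use the real document/DocumentFragment;
// the stub below stands in so the sketch also runs outside a browser.
const doc = typeof document !== 'undefined' ? document : {
  createDocumentFragment: () => ({
    children: [],
    appendChild(child) { this.children.push(child); }
  }),
  createElement: (tag) => ({ tagName: tag, textContent: '' })
};

function renderBatch(items) {
  const frag = doc.createDocumentFragment();
  for (const { tag, text } of items) {
    const el = doc.createElement(tag);
    el.textContent = text;
    frag.appendChild(el);
  }
  return frag; // container.appendChild(frag) attaches everything in one pass
}
```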

I did work out a way to accomplish a hybrid approach. It's clever, but it is probably a little too clever. I cannot help but feel that plenty of people would look at it and ask, "Why didn't you just do server side rendering?"

Does SSR make more sense for your objectives? NodeJS won't be restricted by CSP, so it can freely evaluate the expressions in HTML. It can build--and potentially cache--the entire DOM before pushing it over the wire, thus performing better on the Core Web Vitals. Plus, SSR has the benefit of being the de facto solution for 2020.

I can actually implement the hybrid approach in a small amount of JS and with a small performance impact; however, it brings with it a degree of opinionation and would likely necessitate a bundler plugin as a matter of DX.

I can pursue it further; however after producing a working prototype, I was distinctly left with the impression that maybe the handoff between our libraries should occur between the server and the client.

karimayachi commented 4 years ago

Looking at our respective approaches to binding engines, yours seems to be a top-down approach. Mine is more of a bottom-up approach.

I hadn't looked at your implementation yet, so as to start as unbiased as possible. But top-down was my intent. I want it to be the case that you don't need anything more than existing HTML (and CSS) to get started. No build chain, no tools, no CLI, no dependencies (OK, MobX happened).

I think the things you mention are great for the advanced user, maybe in an opt-in fashion?

Plus, SSR has the benefit of being the de facto solution for 2020.

I still can't shake the feeling that the shift towards SSR is a workaround rather than an intrinsic value, and I still think CSR is the better solution (if performance, SEO, etc. are properly dealt with). CSR is the (not only de facto) standard for almost every platform except the Web.

Google has set a goal of eliminating what they call Cumulative Layout Shift or CLS.

Did not know about this. Top-down will bring some CLS along with it. Optimization and minimization of CLS should be a goal, but I'm not sure it would be enough reason for me to move away from top-down altogether.

I can actually implement the hybrid approach in a small amount of JS and with a small performance impact; however, it brings with it a degree of opinionation and would likely necessitate a bundler plugin as a matter of DX.

I am very much interested on how you had visioned this hybrid solution!

I can pursue it further; however after producing a working prototype, I was distinctly left with the impression that maybe the handoff between our libraries should occur between the server and the client.

Do you mean that Imagine (top-down) should be the SSR module and Knockdown (bottom-up) should be the CSR module? Or the other way around? Or something else completely? Although I'm not a fan of SSR, I think bottom-up fits SSR better.

avickers commented 4 years ago

Did not know about this. Top-down will bring some CLS along with it. Optimization and minimization of CLS should be a goal, but I'm not sure it would be enough reason for me to move away from top-down altogether.

I think it's going to be a very big challenge. The en vogue way to build websites is to have your HTML be a bunch of Jekyll-like templates and then put all of the textual content into Markdown or JSON files to support i18n. i18n support is every bit as important as CSP for an enterprise-grade solution.

Now, on the one hand, the Knockouty way of using bindings supports this sort of templating for CSR. The challenge is that it means that all of the content becomes dynamically rendered content. The other challenge is translation. You could use CSS to fix sizes for literally everything in English alone, but across different languages and different translators you'll end up with fairly erratic whitespacing that would make the design team cry.

I realize that you don't think this is hugely important, but Google does. The Core Web Vitals won't only be shown in Dev Tools to developers. They will be shown in the Search Console to brand managers, and they will bring ranking penalties if you fail to optimize them. That means that brand managers will come yelling at the engineers if the CLS is above 0.1.

Plus, it will be something that industry thought leaders will use to keep pointing out CSR's inferiority.

I want it to be the case that you don't need anything more than existing HTML (and CSS) to get started. No build chain, no tools, no CLI, no dependencies (OK, MobX happened).

Going back to the days of Knockout. I started out there.

Out of curiosity, how often does MobX introduce breaking changes? I found a roadmap for MobX 6. Looks like they aren't afraid to mix things up. For instance, they are dropping support for decorators.

This is a concern that I have with turning over such a critical--maybe the critical--piece of the framework to an external dependency. You never know what they are going to do or the timeline in which they will do it. They might make breaking changes that break your project. Eventually, for security or other reasons, you'll either need to embrace their changes or fork the project, in which case you have bought a massive amount of technical debt.

Few projects are like Knockout where they make backwards compatibility one of their guiding principles.

If MobX observables are used directly without additional abstraction, then breaking changes in MobX become breaking changes in Imagine.
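One way to hedge against that (hypothetical API; a plain implementation stands in for MobX here): put Imagine's own Observable facade in front of whatever the backing library provides, so a MobX breaking change only touches this adapter rather than every consumer:

```javascript
// Sketch of an insulation layer (hypothetical API): Imagine's own
// Observable wraps whatever backend (MobX or otherwise) is in use, so a
// breaking change upstream only touches this adapter. A plain, dependency-
// free implementation stands in for MobX for illustration.
class Observable {
  constructor(initial) {
    this.value = initial;
    this.subscribers = [];
  }
  get() { return this.value; }
  set(next) {
    this.value = next;
    this.subscribers.forEach((fn) => fn(next));
  }
  subscribe(fn) { this.subscribers.push(fn); }
}
```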

I am very much interested on how you had visioned this hybrid solution!

Well, I tried to solve the KAG Theorem. It's like the CAP Theorem, only with Karim, Andrew, and Google. Nearly as difficult to solve, too. :smile:

So the solution that I came up with would work out something like this in practice.

index.html

<head>
  <link rel="stylesheet" href="bootstrap">
</head>
<body>
  <my-app></my-app>
  <script src="./index.js"></script>
</body>

index.js

import { Observable, Element, ViewModel } from 'knockdown'

import markup from './views/home.html'

class MyViewModel extends ViewModel {
  constructor() {
    super()
    this.name = new Observable('Karim')
  }
}

Element.define("my-app", markup, new MyViewModel(), {nonce: 'xYnWaoDw9dm1', useShadowDOM: false})

views/home.html

<h1>Hello, ${name}</h1>

I figured out a way to get an HTML file evaluated as JS in a secure way that CSP is OK with. It's total shenanigans, and people would not be amused if they knew how it worked. Using a bundler plugin or a server is pretty unavoidable because it relies on using a nonce, and, while that could all be done manually, the DX would suffer without an automated process.

The benefit of this approach is that you get to write your views in HTML. You get to use real JS template literal expressions in them, because they will be evaluated using Knockdown's template literal virtual DOM binding engine.

It would actually only require several dozen extra lines of code for Knockdown, and the performance cost appears to be quite small. It'll definitely be faster than iterating through child nodes in the DOM and adding a bunch of elements sequentially; moreover, because the DOM is built virtually and injected all at once, you end up avoiding the fight with Google over CLS. (Of course, if a server is used rather than a globally distributed CDN, then that brings performance implications and DevOps.)

So, it's nearly as fast as regular Knockdown and completely compatible with Knockdown because it uses all of the same internals. It simply abstracts away the part where you create the Component, hence the Element interface.

Now, there are some limitations.

To do this in a truly secure way, all of the markup does need to be loaded synchronously at the top of the module, and the Elements should all be defined immediately. That's stricter than core Knockdown, but it solves the KAG Theorem.

That means it might make more sense to use something like Express to do traditional routing. That's not all bad, though, because then Express can handle the nonce shenanigans and no bundler plugin would be required.

avickers commented 4 years ago

There's also another consideration with the top-down incremental approach, and that's how it interacts with spiders.

Google is somewhat good at handling JS and client side rendering, but other search engines are less invested because of the costs. The head of Bing has this to say, "Use as little Javascript as possible. Preferably, none."

When you construct the DOM sequentially in the client, how do the crawlers and indexers know when it's finished? They want to spend as little time as necessary on any given site.

There may be a risk of partial crawling/indexing. This was definitely a thing when CSR was in its infancy. I know that Google has strategies to deal with this. Their normal indexer is very resource-starved, but they'll periodically render the site with a much longer-lived indexer. This will pick up slower dynamic content, help them to detect cloaking, and help them to measure things like CLS caused by ads, etc.

Anyway, this may or may not be a problem, but it's kind of why I think it might be helpful to move beyond toy apps to building something like Conduit using our libraries. Building and deploying a real world application that includes analytics libraries, a11y, i18n, a backend, etc. will allow for a meaningful evaluation of the practicality of CSR in 2020. Until we evaluate on these terms, this is mostly an academic exercise that is driven more by developer preferences than real world constraints.

We wouldn't want to spend the next 6 months hashing all this out and coming to consensus on everything only to discover that we've built a product that no one will be willing to use in production! And with this Covid nonsense going on, it's much more difficult to get feedback from userland.

karimayachi commented 4 years ago

KAG Theorem.. I like it!

You have given me (again) a lot to think about. Just a few quick thoughts:

MobX: it allowed me to get started quickly with Imagine and I was surprised by how complete it was and robust it seems. But I agree that replacing it with a self maintained implementation would be preferred.

Crawlers/SEO/spiders/Google: I never considered it important because the applications I usually work on are not public and are not crawled in any way. They sit safely behind login screens and the like. Other than the mentioned Conduit-type applications, I think the majority of enterprise applications fit this (non-public) bill. (Can't back this up with numbers.) But I have to say, it's a blind spot for me and I'm starting to realize the ramifications of ignoring all this.

Anyway, as I'm thinking everything through, I will continue working on Imagine. Certainly not because I don't value your objections, and certainly not because I disagree, but because I'm afraid to end up in crippling apathy. I'd rather keep moving. If that means that I have to refactor 20 times, or even throw the whole thing away: so be it. At least it's providing me some valuable (I hope) insights 😄

avickers commented 4 years ago

Crawlers/SEO/spiders/Google: I never considered it important because the applications I usually work on are not public and are not crawled in any way. They sit safely behind login screens and the like. Other than the mentioned Conduit-type applications, I think the majority of enterprise applications fit this (non-public) bill. (Can't back this up with numbers.)

That's a really interesting point, and it's true that the times I've been able to use Knockout in recent years were on internal/Electron projects.

I can see how it might be the case that worrying less about public web facing stuff might allow for some competitive advantages for other projects.

I think there are two things to consider. One is the actual marketshare in production. The other is the marketshare of stages at conferences. Now, my experiences might be heavily biased, but most of the talks that I see tend to focus on the broadest possible deployment environments. Would we find the same opportunities to promote the project if we eschew the public web for a more specialized product?

Of course, I'm not sure anyone even knows what the market for tech conferences will look like in two years. Many have probably been shuttered for good. Will new ones emerge, or will online conferences become the norm, such that there are fewer, more prestigious conferences overall? The US airline industry isn't treating this like a cyclical event. Their assumption is that the industry has fundamentally changed and that many of the full-freight business travelers they rely upon will not be returning.

I'm not sure whether all of this would work to our advantage or disadvantage. I think it's certainly a lot easier to agree to present at a conference for an unsponsored project when you can speak from your living room. :smile: The audiences could also be much larger, but competition for those spots might become more fierce and the conferences might want you to speak to broader audiences. I'm really not sure.

Also, maybe being cooped up with fewer distractions has given people more time to consider the pet peeves they have with their current frameworks and tools, and provide both more willingness and time to consider alternatives.

These are things I've been ruminating on on the side, as I look forward to how we might actually go about "launching" the eventual project. I feel like we might want to approach it not unlike a real business where, after maybe some soft launches at smaller local events, we try to hit up a big conference. Having a lot of people poking around all at once makes it a lot more likely to achieve a sustainable critical mass of a community.

Anyway, as I'm thinking everything through, I will continue working on Imagine. Certainly not because I don't value your objections, and certainly not because I disagree, but because I'm afraid to end up in crippling apathy. I'd rather keep moving. If that means that I have to refactor 20 times, or even throw the whole thing away: so be it. At least it's providing me some valuable (I hope) insights 😄

I think that's definitely the right thing to do anyway. You know, if nothing else, Imagine captures the simplicity and approachability of Knockout, which was an excellent tool for beginners. It could be a tool that makes web development beyond simple HTML accessible as part of general education for youngsters or for hobbyists.

I'd say that's definitely not the case for the composition-driven, CLI-tool-heavy, SSR approach to web dev! While web dev has become a lot more powerful in the last 5 or 6 years, it's also become less accessible in some ways.